text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Reopen parameter regions in Two-Higgs Doublet Models
The stability of the electroweak potential is a very important constraint for models of new physics. At the moment, it is standard for Two-Higgs doublet models (THDM), singlet or triplet extensions of the standard model to perform these checks at tree-level. However, these models are often studied in the presence of very large couplings. Therefore, it can be expected that radiative corrections to the potential are important. We study these effects at the example of the THDM type-II and find that loop corrections can revive more than 50% of the phenomenological viable points which are ruled out by the tree-level vacuum stability checks. Similar effects are expected for other extension of the standard model.
I. INTRODUCTION
The discovery of a scalar boson at the Large Hadron Collider with a mass of around 125 GeV was a milestone for particle physics [1,2]. This state has all expected properties of the long searched-for Higgs boson, and all particles predicted by the standard model of particle physics (SM) have finally been found. Moreover, the measured mass itself lies in a particular interesting range: combining this information with the one of the measured top mass m t , one finds that the scalar potential of the SM becomes unstable at very high energies [3]. This is not a fundamental problem for the SM. The lifetime of the vacuum we are living in exceeds the age of the universe by many orders of magnitude because of the large separation of the two minima. As soon as extensions of the SM with more scalars are considered, new vacua much 'closer' to ours can appear. Thus, it is necessary to check which combinations of parameters in these models provide a stable or at least sufficiently long-lived potential with correct electroweak symmetry breaking (EWSB). In supersymmetric models it was already realised in the 80s that dangerous charge and colour breaking minima can occur in specific directions of the scalar potential [4][5][6][7][8][9][10][11][12][13][14]. In the recent years, these constrains were proven to be even too weak. Other dangerous minima were discovered with numerical methods [15][16][17][18][19][20][21][22][23][24][25] and the impact of loop and thermal corrections was analysed [15,16]. In contrast, the vacuum stability of non-supersymmetric models is still mainly checked at tree-level. For instance, the tree-level potential of two-Higgs doublet models (THDM) has been studied intensively in literature [26][27][28][29][30][31][32][33][34], and very compact conditions for the stability of the electroweak (ew) potential were found. These results where also generalised to other nonsupersymmetric model [35][36][37][38]. However, it is often not checked how robust these conditions are against radiative corrections. It is known from the minimal supersymmetric standard model (MSSM) that radiative corrections can have an impact on the vacuum stability, but often the conclusion 'stable' or 'unstable' doesn't change once a suitable renormalisation scale is chosen [16]. The reason is that in the MSSM all couplings in the scalar potential are O(g 2 ), i.e. moderately small. This must not be the case in THDMs: since often, masses and not couplings are chosen as input, in principle, any size of couplings can appear. Usually, the tree-level perturbativity constraints [39,40] are applied which filter out points with very large couplings 4π. Nevertheless, quartic couplings O(10) are not rare. Thus, large loop effects due to these huge couplings aren't surprising at all. As we will see, for a large fraction of points these corrections stabilise the potential. Only in a few cases they destabilise it. This is similar to what has been observed in a singlet extension and the inert THDM, see Refs. [41][42][43] This letter is organised as follows: in sec. II the chosen conventions for the THDM are summarised and the used methods to check vacuum stability at the tree-and looplevel are explained. In sec. III the numerical setup is presented and the overall impact of the loop corrections is discussed. We summarise in sec. IV
II. THDM AND VACUUM STABILITY
The scalar potential of a CP conserving THDM with softly broken Z 2 symmetry reads 1 After EWSB, the neutral components of the two Higgs states receive Vacuum expectation values (VEVs) as with v 2 1 + v 2 2 = v 246 GeV and tan β = v2 v1 . The mass spectrum consists of superposition of these gauge eigenstates, i.e. (φ 1 , φ 2 ) → (h, H), (σ 1 , σ 2 ) → (G, A) and (H + 1 , H + 2 ) → (G + , H + ). Here, G and G + are the Goldstone modes of the Z and W boson. The mixing in these sectors is fixed by tan β, while in the CP-even sector a rotation angle α defines the transition from gauge to mass eigenstates. In practical applications, one can trade the physical masses m h , m H , m A and m H + as well as tan α for the quartic couplings. The necessary relations are with t β = tan β and t α = tan α. This has the advantage that physical observables instead of Lagrangian parameters can be chosen as input. However, one needs to be careful since a randomly chosen set of masses could easily correspond to a problematic set of quartic couplings: for very large couplings perturbativity will be spoilt and also unitarity can be violated. Therefore, the first constraints which are usually applied are those for tree-level unitarity which, roughly spoken, remove points where combinations of λ's are larger than 8π. The next set of theoretical constraints are those for a stable vacuum. The tree-level conditions to prevent unbounded from below (UFB) directions in the potential are [50] while the condition to have no deeper vacua than the ew one is [51] −m 12 m 2 1 − These conditions involve the tree-level quartic couplings which are calculated from the chosen masses and angles. However, it is well known from the SM that for large field extension the tree-level potential gets unreliable. In this case one should consider the renormalisation group equation (RGE) improved potential where the parameters are replaced by their running, i.e. scale dependent, values. The running of the quartic coupling in the SM is dominated by the contributions from the top quark which let it run negative at very high scales. In the THDM, the one-loop β-functions for λ 1 and λ 2 are given by where the dots indicate subdominant contributions involving g 1 , g 2 . Thus, for large λ 3,4,5 the slope of the running will change, i.e. λ 1,2 increase with the energy scale. To exemplify this, we show in Fig. 1 the scale dependence of λ 1 in a toy example involving only λ 1 , λ 3 , Y t and g 3 . When starting with λ 1 = −1, the coupling becomes already positive below 1 TeV for λ 3 > 6. This points towards a stabilisation of the potential at not too high energies. Since the scale at which λ 1 changes its sign is not far from the ew scale, an one-loop fixed order calculation can be expected to catch the dominant effects. Therefore, we will consider in the following also the one-loop effective potential Here, V CT is the counter-term (CT) potential which is discussed below. The Coleman-Weinberg potential V (1) CW is given by [52] V (1) with r i = 1 for real bosons, otherwise 2; C i = 3 for quark, are usually chosen to cancel all loop corrections to the masses and angles, i.e. the input values are the on-shell ones. One can derive a suitable set of CTs from the renormalisation conditions Here, T CT i and M 2,CT are the first and second derivative of the CT potential, and t i and Π are the loop corrections to the one-and two-point functions. The crucial point is that the derived CTs depend on the ew VEVs, i.e. they give a cancellation between V CT and V CW only at the ew minimum, but not at other positions of the potential. Having all the machinery at hand, we can compare now the results for the tree-level, RGE improved 2 and the one-loop effective potential. This is done in Fig. 2 for two points which suffer from UFB directions at tree-level in the direction v 1 = 0, v 2 → ∞. We see that the loop corrections have as expected a clear impact on the shape of the potential. In the first example, the value of λ 2 is -1 and the other quartic couplings are not large enough to stabilise the potential in the direction of v 2 . In contrast, in the second example with λ 2 = −0.2 the UFB direction disappears at the loop-level and the point becomes absolutely stable. Of course, also the situation is possible that the UFB direction disappears at the loop level, but new minima appear which are deeper than the ew one. Similarly, we find that the tree-level check for deeper minima than the ew one, eq. (10), can lead to a wrong conclusion about the stability of a point. We show at one example in Fig. 3 how significantly the shape of the scalar potential can change when going to the loop level. We see that the two global minima, which are at tree-level 25% deeper than the ew one, have completely disappeared at the one-loop level. Similarly, one can find also the opposite: points which look stable at the tree-level become metastable at the loop-level. It is now interesting to see how big the fraction of points is where the conclusion about the stability changes at the loop-level.
III. RESULTS
As we have seen, loop corrections can be very important to judge the stability of the THDM. Therefore, we are going to check now how often this can happen in a common parameter scan. For that purpose, we use Vevacious [56] to test the stability of the one-loop effective potential. We have generated the necessary model files with SARAH 3 . We also used SARAH to generate a SPheno module [57,58] for the THDM which use the masses and tan α as input. SPheno automatically translate this input into the tree-level couplings. In addition, we have modified the code to calculate also the CTs for the λ's which are necessary to keep the loop masses to their tree-level values. This information is then passed to Vevacious to check the vacuum stability of the one-loop effective potential. As data sample we have generated 400,000 points using the following parameter ranges 4 : 200 GeV < m H , m A < 1000 GeV 500 GeV < m H + < 1000 GeV − 10 6 GeV 2 < m 12 < 0 − 1 < tan α < 0, 1 < tan β < 1.5 Afterwards, points are discarded which violate the tree-level unitarity limits or which fail the HiggsBounds checks [59,60]. The remaining 22,395 points can be categorised as shown in Tab. I. Thus, more than half of the points which are ruled out by the tree-level UFB checks are valid at the loop-level. In general, there is a correlation between the size of the quartic couplings and the mass splitting between m H , m A and m H + . Consequently, we find also a correlation between the maximal splitting between the heavy Higgs state and the size of λ 1,2 which can be stabilised via loop corrections 5 . This is shown in Fig. 4 where the misidentification rate r is given as function of min(λ 1 , λ 2 ) and the maximal mass difference. r gives for the UFB and metastability check the ratio of points for which the result 'unstable' changes to 'stable' at the loop-level, while for stable tree-level points it is vice versa. For small (> −0.2) negative values of λ 1,2 , we find r ≈ 1 for the entire range of mass differences. Only for a small island with very large mass differences points stay unstable at the loop-level. These points have in common that λ 3 is O(4π), i.e. the loop corrections might not be under control any more. If we would have applied a stronger cut on |λ i | as it might be necessary to really keep perturbativity under control [61], e.g. |λ i | < 2π, this island wouldn't appear and r = 1 would hold up to min(λ 1 , λ 2 ) −0.15. To test the usefulness of the check for metastability, the sample of points is significantly lower than for UFB, i.e. there is a non-negligible uncertainty in the misidentification rate. Nevertheless, the obtained results suggest that in most cases it rules out points which are viable. On the other side, the fraction of points which is stable at tree-level but becomes unstable at loop-level is quite low. We show the distribution of points without UFB directions in Fig. 5. One can see that only for small |m 12 | and large mass differences between the heavy Higgs states, a point stable at tree-level can become unstable at loop-level.
Up to now, we have only studied the overall stability of the potential. However, even a meta-stable vacuum is viable as long as the life-time exceeds the age of the universe. We have checked the points with deeper minima using the code CosmoTransition [62] and found that the majority of points has a comparable short life-time. Only in about 5% of the cases, the tunnelling rate is sufficiently small to consider these points long-lived at zero temperature. If thermal corrections are included, the fraction of long-lived points shrinks to 1%.
IV. CONCLUSION
We have studied in this letter the effects of radiative corrections to the vacuum stability conditions in THDMs. In these models large quartic couplings appear if large mass differences between the heavy Higgs states are considered. These large couplings cause important loop correction to the scalar potential. As consequence, we found that a large fraction of points which is ruled out by tree-level conditions are revived at the loop level. This happened in more than 50% of the cases for points failing the standard UFB checks, and even in more than 90% of the cases for the tree-level metastability check. Because of the importance of the UFB checks, more than 40% of all phenomenological viable points are misidentified at tree-level. If no checks for vacuum stability would have been applied at all, the fraction of wrong points would be only ∼ 30% for the considered dataset. Because of these large misidentification rates, it seems necessary to push the standards of these theoretical constraints beyond the tree-level. It is also very likely that similar results would be found for other non-supersymmetric models if quartic couplings 1 are used. | 3,459.4 | 2017-05-10T00:00:00.000 | [
"Physics"
] |
Obeticholic Acid for Primary Biliary Cholangitis
Primary biliary cholangitis (PBC) is a rare autoimmune cholestatic liver disease that may progress to fibrosis and/or cirrhosis. Treatment options are currently limited. The first-line therapy for this disease is the drug ursodeoxycholic acid (UDCA), which has been proven to normalize serum markers of liver dysfunction, halt histologic disease progression, and lead to a prolongation of transplant-free survival. However, 30–40% of patients unfortunately do not respond to this first-line therapy. Obeticholic acid (OCA) is the only registered agent for second-line treatment in UDCA-non responders. In this review, we focus on the pharmacological features of OCA, describing its mechanism of action of and its tolerability and efficacy in PBC patients. We also highlight current perspectives on future therapies for this condition.
Introduction
Primary biliary cholangitis (PBC) is a chronic disease characterized by the accumulation of bile acids in the liver, potentially progressing to cirrhosis, end-stage liver disease, hepatocellular carcinoma, and even death [1]. The existence of gender differences in PBC development has been widely reported. Indeed, PBC develops more frequently in females than males [1]. In the global population, a prevalence of 14.6 cases per 100,000 people has been observed, with a female:male ratio of 9:1, and 1.76 new cases diagnosed per 100,000 people each year [2]. Due to more careful routine testing and/or incompletely understood changes in environmental factors, the definition and outcome of PBC have been reconsidered over the last 30 years, from a severe symptomatic disease characterized by symptoms of portal hypertension to a milder disease with a long natural history [3]. As a consequence, many patients are asymptomatic, and most new diagnoses (up to 60%) are made after the discovery of increased serum biochemical markers of liver function during check-ups performed for unrelated purposes [4,5]. This autoimmune cholestatic disease is characterized by increased plasma levels of alkaline phosphatase (ALP) and the presence of a high titer of antimitochondrial antibodies (AMAs) in over 90% of patients, as well as a PBC-specific anti-nuclear antibody (ANA). The current EASL guidelines suggest that a diagnosis of PBC can be determined in adult patients in the presence of cholestasis and the absence of other systemic diseases, when the ALP value is elevated and AMAs are present with a titer >1:40 [6].
Ursodeoxycholic acid (UDCA) represents the gold standard for PBC therapy, and it is generally administered as a daily oral treatment (recommended dose: 13-15 mg/kg) [6]. UDCA therapy improves liver transplantation (LT)-free survival in PBC patients, including those with early and advanced disease, and also in patients who did not meet the accepted criteria for UDCA response [7]. Even though the improvement of biochemical parameters after UDCA treatment is modest, patients experience a long-term benefit in terms of improved survival. Regardless, non-responders represent 30-40% of all UDCA-treated patients, and globally have a higher risk of PBC progression and a greater need for transplant than responder patients, as well as a higher mortality [8]. A young age at diagnosis and male sex have been associated with a reduced chance of biochemical response to UDCA therapy in a large cohort study from the UK-PBC study group [9]. Accordingly, another large, multicenter long-term follow-up study (n = 4355) found that young PBC patients (aged <45) had significantly lower response rates to UDCA than their older counterparts (aged >65) [10]. However, the biological mechanisms underpinning this clinical observation in non-responders to UDCA are far from completely understood. Therefore, the proposal of a second-line therapy devoted to UDCA non-responders provides the rationale to overcome the observed limitations of drug efficacy. To date, obeticholic acid (OCA) represents the only second-line treatment recommended for nonresponder PBC patients, which are intolerant to UDCA therapy or in whom a 12 monthtreatment haven't produced benefit. As demonstrated by clinical trials, including the phase III POISE study described in detail below, OCA is effective in improving the serum and histological endpoints of PBC patients in monotherapy. In this review, we focus on the mechanism of action of OCA and its tolerability and efficacy in PBC, and offer a perspective on the future treatment of this condition.
Pharmacological Actions of OCA
OCA, a synthetic derivative of the bile acid (BA) chenodeoxycholic acid, is an agonist of the farnesoid X receptor (FXR) [11], a key nuclear receptor mainly expressed in the liver and gut, which orchestrates complex signaling pathways related to the homeostasis of bile acids (BAs) (Figure 1). In vitro pharmacological studies have demonstrated that OCA is an FXR agonist with a potency 100 times higher than endogenous BAs [12]. BA synthesis occurs in the liver starting from hepatic cholesterol. After their synthesis, BAs are secreted into the gut to help digestion and consequently the absorption of nutrients, in particular lipids and liposoluble vitamins, by virtue of their emulsifying ability [13]. After their secretion, about 95% of BAs are reabsorbed from the terminal ileum, thus entering into the enterohepatic circulation. As FXR agonists, BAs themselves participate in the finely tuned regulation of their own synthesis and secretion through the modulation of FXR activation. In PBC-related cholestasis, the enterohepatic circulation of BAs is impaired, leading to hepatic inflammation and damage.
icines 2022, 10, x FOR PEER REVIEW 2 of 11 treated patients, and globally have a higher risk of PBC progression and a greater need for transplant than responder patients, as well as a higher mortality [8]. A young age at diagnosis and male sex have been associated with a reduced chance of biochemical response to UDCA therapy in a large cohort study from the UK-PBC study group [9]. Accordingly, another large, multicenter long-term follow-up study (n = 4355) found that young PBC patients (aged <45) had significantly lower response rates to UDCA than their older counterparts (aged >65) [10]. However, the biological mechanisms underpinning this clinical observation in non-responders to UDCA are far from completely understood. Therefore, the proposal of a second-line therapy devoted to UDCA non-responders provides the rationale to overcome the observed limitations of drug efficacy. To date, obeticholic acid (OCA) represents the only second-line treatment recommended for non-responder PBC patients, which are intolerant to UDCA therapy or in whom a 12 monthtreatment haven't produced benefit. As demonstrated by clinical trials, including the phase III POISE study described in detail below, OCA is effective in improving the serum and histological endpoints of PBC patients in monotherapy. In this review, we focus on the mechanism of action of OCA and its tolerability and efficacy in PBC, and offer a perspective on the future treatment of this condition.
Pharmacological Actions of OCA
OCA, a synthetic derivative of the bile acid (BA) chenodeoxycholic acid, is an agonist of the farnesoid X receptor (FXR) [11], a key nuclear receptor mainly expressed in the liver and gut, which orchestrates complex signaling pathways related to the homeostasis of bile acids (BAs) (Figure 1). In vitro pharmacological studies have demonstrated that OCA is an FXR agonist with a potency 100 times higher than endogenous BAs [12]. BA synthesis occurs in the liver starting from hepatic cholesterol. After their synthesis, BAs are secreted into the gut to help digestion and consequently the absorption of nutrients, in particular lipids and liposoluble vitamins, by virtue of their emulsifying ability [13]. After their secretion, about 95% of BAs are reabsorbed from the terminal ileum, thus entering into the enterohepatic circulation. As FXR agonists, BAs themselves participate in the finely tuned regulation of their own synthesis and secretion through the modulation of FXR activation. In PBC-related cholestasis, the enterohepatic circulation of BAs is impaired, leading to hepatic inflammation and damage. Figure 1. Molecular mechanism of hepatic OCA pharmacodynamics. OCA activates FXR, thereby triggering cellular pathways leading to a reduction in the synthesis and hepatic uptake of BAs, and an increase in their efflux from the liver. Furthermore, OCA acts on LSEC and KC, exerting anti-inflammatory and antifibrotic effects by reducing the production of proinflammatory Figure 1. Molecular mechanism of hepatic OCA pharmacodynamics. OCA activates FXR, thereby triggering cellular pathways leading to a reduction in the synthesis and hepatic uptake of BAs, and an increase in their efflux from the liver. Furthermore, OCA acts on LSEC and KC, exerting anti-inflammatory and antifibrotic effects by reducing the production of proinflammatory cytokines and HSC activation, respectively. Abbreviations: farnesoid X receptor (FXR), retinoid X receptor (RXR), bile acid (BA), Kupffer cell (KC), liver sinusoidal endothelial cell (LSEC), hepatic stellate cell (HSC), small heterodimer partner (SHP), liver receptor homolog 1 (LRH-1), fibroblast growth factor-19 , sodium taurocholate co-transporting polypeptide (NTCP), bile salt export pump (BSEP), multidrug resistance protein-3 (MDR3), organic solute transporters (OST), transforming growthfactor β (TGFβ), connective tissue growth factor (CTGF), platelet-derived growth factor β-receptor (PDGFR-β), monocyte chemo-attractant protein-1 (MCP1), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), inhibitor of kB (IκB).
Similar to other nuclear receptors [14,15], upon activation, FXR binds to the retinoid X receptor (RXR). The binding of the FXR-RXR heterodimer to DNA responsive elements results in the induction of the small heterodimer partner (SHP) gene, finally causing the transcriptional repression of rate-limiting enzymes in BA synthesis, such as cytochrome P450 (CYP)7A1 and liver receptor homolog 1 (LRH-1) [16]. LRH-1 is a transcription factor with a key role in the regulation of BA and cholesterol homeostasis, and also in coordinating a panel of other hepatic metabolic processes [17]. In addition, FXR stimulates the synthesis of fibroblast growth factor-19 , which in turn participates in the inhibition of CYP7A1 and CYP8B1 expression through the fibroblast growth factor receptor-4 (FGFR4) pathway in hepatocytes [18]. As a result, the above-described FXR/SHP and FXR/FGF19/FGFR4 pathways are major negative regulators of BA synthesis. Furthermore, FXR inhibits the sodium taurocholate co-transporting polypeptide (NTCP) via SHP, thereby repressing hepatic BA uptake [19]. FXR activation also increases the efflux of BAs from the liver to the canalicular lumen by targeting the transporter bile salt export pump (BSEP) and multidrug resistance protein-3 (MDR3), triggering another mechanism responsible for the anticholestatic effects of FXR agonists [20]. FXR activation also leads to an increase in the expression of the organic solute transporters OSTα and β, which also enhance BA efflux from the liver to the portal vein [21]. Besides its pivotal activity as a BA-responsive transcription regulator of BA synthesis and metabolism, as described in detail above, it has been demonstrated that FXR-mediated signaling plays a role in hepatic fibrogenesis, although controversial results have been obtained regarding this function. Hence, it has been observed that FXR knock-out mice develop hepatic inflammation, fibrosis, and liver tumors over time [22] and, accordingly, it has been demonstrated that OCA-induced FXR activation reduced liver fibrosis in two different experimental in vivo models of liver fibrosis [23]. Other authors have suggested that FXR in liver fibrosis models can be either detrimental or irrelevant, depending on the type of damage [24]. Notably, no direct effects of FXR agonists could be observed on the activation of cultured hepatic stellate cells (HSCs) [25,26], which are the main cell types triggering the fibrogenesis process [27].
OCA exerted both anti-inflammatory and ant-fibrotic effects by targeting the activation of both liver sinusoidal endothelial cells (LSECs) and Kupffer cells [26]. In particular, OCA reduces the production of inflammatory cytokines and chemokines (transforming growth-factor β, connective tissue growth factor, platelet-derived growth factor β-receptor, monocyte chemo-attractant protein-1) by these two types of sinusoidal cells, which in turn activate HSCs [28]. Hence, the mechanism of the anti-inflammatory effect relies on the inhibition of the NF-κB signaling pathway via the up-regulation of its inhibitor IκBα. In summary, OCA acts by a complex mechanism, comprising several actions: (a) the regulation of bile acid transport; (b) the reduction in inflammation; (c) the modulation of cellular pathways triggering fibrogenesis [29]. Due to the induction of a signaling pathway which modulates the activity of fibroblast growth factor-19 (FGF-19), OCA exerts greater hepatoprotection than UDCA. OCA also induces the expression and secretion of gut-derived hormones, e.g., FGF-19 [30]. This hormone is absorbed and secreted by enterocytes into the portal blood, thereby reaching the liver through the portal venous system. In the liver, FGF-19 is involved in the anticholestatic mechanisms described above.
Pre-Registration Studies
OCA has been evaluated in monotherapy in a phase II study in which PBC patients were enrolled with the aim of assessing its benefit in the absence of UDCA treatment [31]. After randomization, patients were treated with a placebo (23 patients), or two doses of OCA (10 mg in 20 patients and 50 mg in 16 patients) for 3 months, and followed up by a 6-year open-label extension. The ALP reduction, measured as the percentage difference from the baseline, was evaluated as the primary endpoint of this study. The treatment with both dosages induced a significant ALP reduction compared to the placebo. Accordingly, other plasma parameters were reduced in OCA-treated patients, e.g., conjugated bilirubin, GGT, AST, and immunoglobulins. In this study, the most common adverse effect reported after OCA treatment was pruritus, having been experienced by 15% of the 10 mg-treated patients and 38% of the 50 mg-treated patients.
The first approval of OCA was obtained following the results of a phase III trial that enrolled 216 patients [32], and demonstrated that about 59% of UDCA-non-responders benefitted from a one-year treatment with a combination of OCA and UDCA. These patients reached the clinical endpoint, set as an ALP level of less than 1.67 times the upper limit of the normal range, with a reduction of at least 15% from the baseline). Thereafter, the study underwent an open-label extension phase in which 193 enrolled patients were switched to OCA treatment [33]. The results of the following 3-year interim analysis showed that OCA therapy was well tolerated and could be demonstrated to maintain its performance over time. Additionally, a post-hoc analysis revealed that OCA induced a significant bilirubin reduction (both total and direct) that was particularly evident in those patients with a high baseline value of direct bilirubin [34]. This analysis thus confirmed the beneficial effects of OCA therapy in high-risk patients. Furthermore, the histological analysis of liver biopsies at baseline and after a 3-year treatment with OCA in a subgroup of patients (n = 17) revealed the improvement or stabilization of a panel of histologic disease features, e.g., ductular injury, fibrosis, and collagen morphometry [35]. This analysis, despite the limited number of assessed liver biopsies, further demonstrated that OCA is effective in UDCA-non-responders. The most reported adverse effects related to OCA treatment were pruritus and fatigue, which were experienced by 77% and 33% of patients, respectively [34]. As regards pruritus, only 8% of the OCA-treated patients interrupted the treatment during the open-label extension phase and, in general, patients reported a mild-to-moderate pruritus, and those experiencing severe pruritus were treated with specific medication after a clinical consult. In general, the results of this clinical trial demonstrate that 3 years of OCA treatment were efficient in ameliorating or stabilizing multiple histological features of PBC in most patients with an inadequate UDCA response, and supported the approval of OCA from the FDA in 2016.
Another sub-analysis of the above-reported trial observed that OCA treatment induced a significant reduction in the AST to platelet ratio (APRI). This effect was observed after a 1-year treatment and in the open-label extension phase in the groups treated with 10 and 50 mg OCA with respect to the placebo [36]. Liver stiffness (LS) was evaluated in 39 patients randomized and dosed with the placebo, 35 patients dosed with OCA 5-10 mg, and 32 patients dosed with OCA 10 mg. LS at baseline was 12.7 ± 10.7, 10.7 ± 8.6, and 11.4 ± 8.2 kPa, respectively. During the double-blind and open-label phases, a decrease, while not significant, was only observed in the OCA 10 mg group, while both the OCA 5-10 mg and placebo groups displayed mean increases in liver stiffness [36]. In other words, a trend towards a reduction in LS was observed only in the arm treated with the highest dose of OCA. In another scenario, namely non-alcoholic steatohepatitis, patients enrolled in the phase III REGENERATE study with OCA showed a significant reduction in LS after 18 months in the OCA 25 mg group vs. the placebo [37]. Thus, the assessment of the antifibrotic activity of OCA in a clinical setting has several limitations, mainly considering that changes in LS occur during a median interval of 2 years.
The main pre-registration studies evaluating the efficacy and safety of OCA are reported in Table 1.
Real-World Data on OCA
Currently, OCA is available as tablets containing 5 and 10 mg under the brand name Ocaliva. Typically, therapy for PBC patients is started with the administration of an initial dose of 5 mg once daily, which can be titrated to a maximum of 10 mg daily [40]. The general recommendation for patients with advanced cirrhosis (Child-Pugh B or C) is to start with a dose of 5 mg once weekly, which is then increased to a maximum of 10 mg twice weekly if the drug is well-tolerated.
The most significant ADRs caused by OCA therapy which have been reported in clinical trials are pruritus, fatigue, nausea, and headache. To a minor extent, hypersensitivity reactions and depression have also been observed [40]. As far as pruritus is concerned, it appears to be less severe if the patients are initially treated with a low dose, which can then be gradually increased. As a consequence of the alteration of lipid metabolism, which is due to other molecular signaling pathways triggered by FXR activation, an increase in total serum lipid levels and a small decrease in high-density lipoprotein (HDL) have also been reported in PBC patients treated with OCA, but to date these effects have not been correlated to a long-term increased cardiovascular risk [30].
Real-world data are crucial for understanding treatment effectiveness and safety in everyday clinical practice where: (i) patients' characteristics are more heterogeneous with respect to sub-phenotypes, e.g., cirrhosis and overlap syndrome between PBC and AIH; (ii) the treatment schedule may be less rigid and more "personalized" by each treating physician. A number of post-registration clinical trials are ongoing and recruiting patients ( Table 2). Three real-world cohorts have been published thus far (Table 3), all reporting results for 12 months of OCA treatment [41][42][43]. Altogether, 375 patients treated with OCA were included in these three studies. The main characteristics of the three cohorts are respectively described in Table 3. The inclusion criteria were: hepatologist's discretion for the Canadian cohort, lack of response to Paris II criteria [44] for the Iberian cohort and ALP >1.5 times the normal according to the Italian Medicines Agency (AIFA) for the Italian cohort. The percentages of patients with cirrhosis were 6.3, 10, and 15%. The percentages of response at 12 months according to the POISE criteria were respectively 18, 29.5, and 51.9%. Due to the retrospective design of these studies, a comparable evaluation of the response to OCA is impossible. However, it has to be pointed out that in the Italian cohort, with one third of cirrhotic patients, the response rate was lower due to the higher drop-out and higher levels of bilirubin at baseline in cirrhotic patients. Within the Canadian cohort, 11 patients (17%) had a permanent discontinuation of treatment (2 of them with Child-Pugh A and B respectively) for suspected hepatotoxicity. The first case was a 67-year-old female who discontinued OCA due to an increase in ALP. The second patient was a 54-year-old female who developed severe cholestatic cirrhosis, who was transplanted for severe complications. Within the Iberian cohort, a total of 14 patients (11.67%) discontinued the treatment due to severe adverse events or decompensation of cirrhosis. Within the Italian cohort, 33 patients (17%) discontinued OCA for pruritus or other side-effects. In the same cohort, factors associated with a lack of response at 12 months were: previous treatment with fibrates, high levels of ALP at baseline, and high levels of bilirubin at baseline [43]. A further analysis was performed in 100 cirrhotic patients of the Italian cohort [45]. The response to treatment was obtained in 41% of cases, according to the POISE criteria, confirming OCA efficacy at this stage as well. In this case, the use of the normal range criteria means that the endpoint was reached by only 11.5% of the cirrhotic patients. Regarding the reported severe adverse effects, 22% of patients discontinued OCA therapy: 5 patients due to jaundice and/or ascitic decompensation, 4 due to upper digestive bleeding, and 1 subject died after the substitution of a transjugular intrahepatic portosystemic shunt.
A sub-analysis from the Italian and Iberian cohorts found that patients with PBC/AIH overlap syndrome had a similar response after OCA treatment [42,43].
Two further real-world studies were presented at an AASLD virtual meeting in 2020. The first study, derived from the GLOBAL PBC group, enrolled 290 patients in 11 centers located between Europe, North America, and Israel [46]. Among them, 215 patients met the POISE criteria for eligibility, 60 patients possessed available biochemical data for a period of 12 months, and 35% of patients reached the pre-defined POISE primary endpoint after 1 year of treatment. The second study was conducted on 319 patients that received OCA therapy between May 2016 and September 2019, and were considered eligible for OCA according to laboratory databases and American administrative claims [47]. According to the Toronto criteria, the proportion of patients achieving a biochemical response to the treatment was 48% after 1 year, 58% after 2 years, and 55% after 3 years which marked the end of the follow-up period [48]. More recently, a large nationwide experience of secondline therapy in PBC has been reported [49]. The study was conducted from August 2017 to June 2021 across 14 centers in the UK. A total of 457 PBC patients with an inadequate response to UDCA were recruited. Overall, 259 patients received OCA and 80 received fibrates (fibric acid derivatives) and completed 12 months of therapy, yielding a dropout rate of 25.7% and 25.9%, respectively. Treatment efficacy was quantified by the proportion of patients attaining a biochemical response according to propensity score matching. The 12-month biochemical response rates were 70.6% with OCA and 80% under fibric acid treatment, without reaching any statistical significance.
With the objective of evaluating the time to first occurrence of liver transplant or death, OCA-treated patients in the POISE trial and open-label extension were compared with non-OCA-treated external controls [50]. Propensity scores were generated for external control patients meeting POISE eligibility criteria from 1381 patients in the Global PBC registry study and 2135 in the UK PBC registry. Over the 6-year follow-up, patients treated with OCA had a significantly greater transplant-free survival than comparable external control patients.
Combined Therapy with OCA and Fibrates
Fibrates, well-known agents with anti-lipidemic properties, were proposed as a secondline treatment because their beneficial effects on inflammation, cholestasis, and fibrosis are documented, resulting from their activity as peroxisome proliferator-activated receptor (PPAR) agonists. Fibrates have different affinities to the three main PPAR isoforms, PPARα, PPARβ/δ, and PPARγ, and consequently can activate different signaling pathways. As an example, fenofibrate, a PPARα agonist, upon binding to its receptor, increases the expression of multidrug resistance protein 3 (MDR3) [51]. Furthermore, it increases biliary phosphatidylcholine secretion, thus ameliorating a recognized biomarker of cholestasis. Bezafibrate acts as a dual agonist of PPARα and PPARγ and is also a pregnane X receptor (PXR) agonist [52]. The BEZURSO trial is a Phase III study, employing bezafibrate in combination with UDCA, and was the first placebo-controlled trial evaluating the use of fibrates as a second-line treatment for PBC. In this study, the second-line combination therapy of bezafibrate and UDCA was effective in obtaining a complete biochemical response with a rate significantly higher than that observed in patients treated with a placebo and UDCA [53]. This regression was associated with a concurrent improvement of both symptoms and surrogate markers of liver fibrosis. The most frequently reported ADRs of fibrates include increased levels of creatinine and transaminases and heartburn. As a consequence of its main mechanism of action involving a reduction in BA synthesis, clofibrate treatment can lead to the formation of gallstones and hypercholesterolemia [54], two events which have not been observed during treatment with fenofibrate or bezafibrate.
A triple therapy with UDCA, OCA, and fibrates was studied in a multicenter retrospective cohort of patients with PBC [55]. Fifty-eight patients were treated with a combination of UDCA (13-15 mg/day), OCA (5-10 mg/day), and fibrates (fenofibrate 200 mg/day or bezafibrate 400 mg/day). This combination achieved a significant reduction in ALP level compared to dual therapy (odds ratio for ALP normalization of 5.5). The primary outcome (change in ALP) and the effect on pruritus are summarized in Table 4.
Conclusions
In May 2021, the Food and Drug Administration issued a new warning restricting the use of OCA in patients with advanced cirrhosis (https://www.fda.gov/drugs/drug-safetyand-availability/due-risk-serious-liver-injury-fda-restricts-use-ocaliva-obeticholic-acid-pr imary-biliary-cholangitis, accessed on 1 September 2022). Advanced cirrhosis was defined on the basis of current or prior evidence of liver decompensation (e.g., encephalopathy, coagulopathy) or portal hypertension (e.g., ascites, gastroesophageal varices, or persistent thrombocytopenia). A practical guidance statement was published thereafter by the AASLD [56]. In this statement, the AASLD reported the contraindication on cirrhosis announced by the FDA, namely decompensated cirrhosis, and further recommended the careful monitoring of any patient with cirrhosis, even if not advanced, receiving OCA. In eligible patients, the recommended starting dose of OCA is 5 mg, which can be titrated to 10 mg after 6 months if OCA is well-tolerated. It is also recommended by the AASLD to monitor liver function before and after the initiation of OCA therapy.
In conclusion, due to its complex and fascinating mechanism, OCA represents a complete intervention for the therapeutic management of those PBC patients who can-not be treated satisfactorily with UDCA for efficacy or safety reasons. However, more real-world data are needed to gain a full understanding of its pharmacological and toxicological features. | 5,993.4 | 2022-10-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
A systematic survey of multiple social robots as a passive- and interactive-social medium
This paper presents a systematic survey of the existing research in the literature on multiple social robots, which are a social medium whose behaviours are designed to interact with people in both direct and indirect ways. This medium uses multiple social robots that interact with each other. This paper explored academic databases (IEEE Xplore, ACM, PubMed, Science Direct, Springer, and Google Scholar) to locate publications from the last five years: January 2018 to June 2023. I found 59 papers on social robots with passive- and interactive-social-medium approaches and categorised and summarised them. These works discuss typical research topics and possible future avenues related to the interaction design of multiple social robots. GRAPHICAL ABSTRACT
Introduction
The use of multiple robots in various applications is rapidly expanding.Sophisticated designs for robot-robot interactions, such as those in autonomous driving [1,2] and warehouse transport robots [3,4], have fuelled the development of numerous technologies that facilitate safer and more accurate cooperation between robots.These technologies, which largely rely on advanced sensing and network-based communication capabilities, emphasise the importance of precise object recognition [5,6], localisation [7,8], and human motion trajectory prediction technologies [9,10].
Interaction design is becoming even more critical in social interaction where multiple robots collaborate.Past survey papers that focused on multiple agents [11,12], including social robots, reported the effectiveness of using multiple robots as a group of human teams and how the robot's behaviours influenced the observers' actions during the interactions.These survey papers summarised interaction styles between people and robots by considering interaction graphs and their relationships, but focused less on behaviours that design concepts between the robotselves.A key aspect of the social interaction of multiple robots is the concept of viewing robots as a medium.This concept, known as the passive-social/interaction medium [13], posits that using multiple robots leads to more effective information by providing a medium (detailed definitions are described in the next section).This concept is based CONTACT Masahiro Shiomi<EMAIL_ADDRESS>the understanding that humans, as social beings, are influenced by the number of others in their environment [14][15][16].Such social influences are also caused by multiple artificial agents (including robots) on human behaviours and impressions [12,17].Showing conversation between multiple robots would support the understanding of people who are observing robots, because dialogues are easier to understand than monologue [18,19].Moreover, expressing social relationships among multiple social robots will increase not only the amount of information but also influence the cognitive aspects.For example, past studies reported that friendly relationships between humans, animals, objects, and robots increased positive impressions toward observers [20][21][22].In other words, using multiple robots enables them to manifest their sociality on their own without human interactions.
However, no systematic investigation has concentrated on such effects, particularly in the context of a passive/interactive-social medium.Past surveys on multiple robots have largely focused on non-social aspects, e.g.swarm robot navigation that does not involve humans.While survey papers have focused on multiple agents, most merely highlight representative examples [11] or downplay the concept of the passive/interactivesocial medium [12].
This study fills this gap by conducting a systematic review from the perspective of the passivesocial/interaction medium and examines a wide range of sources, including academic articles, focusing on studies from 2018 to the present to ensure relevance and topicality.
Passive/interactive-social medium
The concept of passive/interactive-social medium using robot(s), based on a previous work [13], is depicted in Figure 1.Sakamoto et al. categorised social media into four types: passive, interactive, passive-social, and interactive-social.A passive medium indicates that one robot offers information as a medium without any interaction between the robot and users, i.e. a oneway information-providing system where people only observe a robot's behaviour.An interactive medium, on the other hand, indicates that one robot passively offers information like a medium.A key difference in the interaction style between passive and interactive media is that the robots change their verbal/non-verbal behaviours depending on such human behaviours as their locomotion and interruption toward the robot.Therefore, in an interactive medium, a robot's behaviours during the task change based on human behaviours.In other words, the robot interacts with people in various ways by changing their behaviours by exchanging information, i.e. two-way information-providing.A passive-social medium indicates that two (or more) robots offer information by interacting with each other through verbal/non-verbal behaviours; similar to a passive medium, the robots do not interact with users.A interactive-social medium indicates that two (or more) robots offer information as a medium like a passive-social medium, although the robots interact both with users and other robots [13].
Several past studies [23,24] have compared the effectiveness of a passive-social medium to a passive medium, i.e. concluding that observing the interaction between robots is more effective than observing a single robot's monologue in an information-providing context.Using multiple robots enables designers to assign different roles to them, allowing for more diverse expressions during interactions.Moreover, this interaction style allows a robot to abandon advanced sensing functions, simplifying robot deployment in the environment.
Similar to a passive-social medium, numerous studies have reported the advantages of an interactivesocial medium over an interactive medium in such contexts as information-providing, education, and behaviour changes.These studies highlight the importance of the number of robots.Even though the amount of information is identical between one robot and two robots in a conversation, people preferred and changed their behaviours while interacting with two robots.Although past surveys reported the effectiveness of both the passive-and interactive-social media [11,12], no systematic surveys related to a passive/interactive-social medium have yet been conducted.Considering the rapid growth of this research topic, a comprehensive survey will provide useful knowledge for researchers working on interaction designs for multiple social robots.
Methodology
This paper systematically reviewed six databases, which were chosen due to their extensive coverage of the field of Human-Robot Interaction (HRI) and related disciplines: IEEE Xplore, ACM, Springer, Science Direct, PubMed, and Google Scholar.The search strategy chose the following keywords: 'robot-robot interaction,' 'multiple social robots,' 'Passive social medium AND social robots,' 'Passive social media AND social robots,' 'Interactive-social medium AND social robots,' 'Interactive social media AND social robots,' and 'two (three, four . . .ten) social robots.'These keywords captured a broad range of studies that focus on the interaction designs of multiple social robots.The target period ranged from January 2018 to June 2023.This study focused on the last five years due to several critical technological advancements, such as learning models, sensor systems, and the construction of cost-effective robots, enabling researchers to simultaneously use multiple robots.
Studies for inclusion in the review were selected based on the following criteria: (1) those aimed at using multiple social robots and (2) those aimed at designing interactions between robots.I excluded studies when the authors used each robot separately (e.g.employing multiple robots but not simultaneously in evaluations) and interaction designs under a non-social context (e.g. a navigation algorithm in swarm robots without any people in the environments) because this review is focused on interaction design for multiple social robots working together.
The survey identified 770 documents (Figure 2).I removed articles that were repeated in more than one database, a step that reduced the number of documents to 582.Then I only selected documents based on the selection criteria described above by reading abstracts and full texts, winnowing the number to 60, including 5 additional articles from other sources.The details of each paper will be explained in the following section.
Number of studies with passive/interactive-social-medium approaches
The amount of published articles found from 2018 is shown in Figure 3, based on the types of interaction styles between robots.
In 2018 four papers addressed passive-social media, a number that dropped to two in 2019 and increased to six in 2020.The number dropped to three in 2021 and rose to five in 2022.As of June 2023, the number was three, which will likely grow by the end of the year.The general trend shows an increase around 2020, followed by slight decreases in 2021 and 2022.The COVID-19 pandemic seems a likely culprit for the recent research decreases because passive-social medium evaluations with multiple social robots can be conducted by web surveys whose popularity increased due to the difficulties of faceto-face experiment settings during the pandemic.The requirements of experiment time in such an approach are relatively quick, perhaps related to condensing the duration from experiment to publication.Although fears related to COVID-19 have subsided, the number of studies with passive-social approaches has remained rather steady.
Two papers addressing interactive-social media were published in 2018.That increased to six in 2019 and to ten in 2020.The passive-social-medium approach displayed a similar trajectory: slight decreases in 2021 and 2022.In the first six months of 2023, the number rose to six, suggesting a similar trend by the interactive-social medium's activity from 2020.The COVID-19 pandemic may again have caused this decrease.The number of studies in 2021 and 2022 decreased due to the difficulties of conducting face-to-face experiments and field trials, although its momentum has returned in 2023 as COVID-19's impact wanes.
Robot-control settings, environments and number of robots
In the appendix, Tables A1 and A2 summarise the studies with passive-and interactive-social medium, including the above information.
The characteristics of the surveyed studies from the viewpoints of robot-control approaches (autonomous and semi-autonomous) and their environments (video, lab, and field), are shown in Figure 4.Note that since some studies conducted video-based surveys in laboratory environments, the 'video' label includes both weband video-based surveys in laboratories.The studies with real robots, i.e. if robots were physically presented to participants, are labelled 'lab.'All the studies that employed a passive-social-medium approach used fully autonomous robot systems.Therefore, their behaviours do not need any support from human operators.On the other hand, many researchers employed human operators to control/support robot systems with a Wizard of OZ approach for natural interaction between robots and people.
Nine studies conducted field trials in the interactivesocial-medium approach, nearly double the passivesocial-medium approach.In field trials, attracting people in environments for service robots to provide services is important.Interactive-social-medium approaches enable robots to directly interact with people, an appropriate method compared to passive-social-medium approaches for providing services.
The number of robots used in each study is shown in Figure 5.Note that this value is the maximum number of robots that interact together.Most studies with a passive-social-medium approach used two robots, and a few used three robots to express multi-party conversation settings.In the interactive-social-medium approach, researchers also primarily used two robots, although the variety in the number is relatively diverse.In these studies, researchers focused on the effects of different numbers of robot compared to using two robots or simply employed multiple robots to provide richer services.
Exhibitions in real environments
This category includes studies that focused on exhibitions in real environments using a passive-social medium.Using a passive-social medium as an exhibition is a common approach because it enables robots to avoid difficulties caused by direct interaction with people in real environments.For example, Petrović et al. [25] developed a system with which multiple social robots interacted to create robotics co-actors with human actors.Nikolic et al. [26] created two robots that can discuss philosophical topics at exhibitions.They used a recurrent neural network to trigger a human-less creative process by interaction and a serial neck mechanism to express different emotions using non-humanoid robots.For this purpose, they used a recurrent neural network to trigger a human-less creative process by interaction and a serial neck mechanism to express different emotions using non-humanoid robots [27].Krzyżaniak [28] developed a musical robot platform that enables social robots to collaborate in the context of musical interaction.The work developed a simple responsive rhythm synthesiser and analyzed the equilibria that arise when multiple robots play music together.Nijholt [29] investigated how a passive-social medium is useful to express various interaction styles in the context of stand-up comedy.The work surveyed three kinds of trials that tried to make people laugh by interaction among multiple social robots and discussed humour in the context of humanrobot interaction.Swaminathan et al. [30] also focused on robot comedy performance using multiple social robots and conducted a street-style performance to gather audience responses.
Supporting children
This category includes studies where the participants are mainly children or those whose systems aim to support children instead of adults.Similar to exhibitions in real environments, the researchers employed a passivesocial-medium approach to support children by showing interaction among multiple social robots.For example, Brink et al. [31] investigated whether 3-year-olds can learn the names of novel objects from a pair of social robots or inanimate machines, i.e. investigating the capabilities of a passive-social medium using different kinds of robots by video stimuli.They reported that although children can learn more from social robots (and trust them) compared to inanimate machines, they trust their information more when the robots appear to have mindful agency.So et al. [32] investigated the effects of a passive-social medium with multiple social robots to promote responses to the joint attention abilities of low-functioning autistic children.They described how a robot drama provided support for children over time and argued that the children's abilities improved after observing such robot dramas.Peng et al. [33] designed a simulated multi-robot theatre with three similarly-shaped robots and conducted an experiment with children who watched a video and retold its story.Through observing interactions between children and robots, they concluded that contextual behaviours with emotional expressions by robot performances can be understood by children over six years old.
Relationships between robots and people
This category includes studies that focused on investigating how relationships between robots influence observers.Researchers used a passive-social-medium approach to express different relationships between robots (and people) to investigate how showing robotrobot interaction influenced the perceived human impressions of the robots.Xu et al. [34] investigated the psychological effects of the behaviours of multiple robots, as such actions are related to perceptions of social rejection.They conducted a video-based study where multiple robots showed rejection behaviours toward a human experimenter and reported that a sense of rejection rises when robotic groups are less cohesive.Ueno et al. [35] investigated how interaction between robots influences the perceived impressions of observers.Their experiment compared two situations where a robot named NAO interacts/doesn't interact with another robot called Roomba with verbal/non-verbal behaviour.Observers positively evaluated the interaction of the robots.Thus, they reported that a passive-social-medium approach increased positive impressions of the robots.
Verbal expression design
This category includes studies where researchers employed a passive-social-medium approach to investigate the effects of verbal expression designs between robots, i.e. focusing on what factors in verbal interaction influenced the perceived impressions.Singh et al. [36] increased the comprehension of the observers of verbal expressions in interactions among multiple social robots by implementing conversational functions for a group of robots based on Grice's maxim of quantity.They conducted a videobased survey and reported that their implemented system led to the highest amount of understanding of the robots' actions by participants (Figure 6(a)).Velentza et al. [37] investigated the effects of conversational styles among multiple social robots by preparing videos with different storytelling styles (serious, cheerful, friendly, and expressive movements) and conversational interaction between robots.Their participants preferred the one who originally told them the story and disliked the collaboration with a robot that had an extremely friendly attitude and storytelling style, indicating the importance of appropriate verbal expression design in interaction between robots (Figure 6(b)).
Some researchers focused on the effects of verbal expressions between robots under specific situations.Itahara et al. [38] focused on discussions between robots and different opinions and investigated how such discussions changed the opinions of observers.They built a web-based survey around two patterns of opinion changes to a different side and two patterns of opinion reaffirmation.They argued that observing opinion changes from the positive side (i.e.negative-positive) or positive opinion reaffirmation (positive-positive) effectively provided positive and fair impressions.Observing an opinion change that became negative effectively provided negative and fair impressions, although negative opinion reaffirmation (negative-negative) led to significantly less trust in media.Karatas et al. [39,40] focused on situations where participants overheard conversations among three driving agents under driving situations and investigated how the drivers (i.e.participants) were affected by observing conversations among the three driving agents.They developed a social interface named NAMIDA, which provides location-based information from conversations with each other, i.e. drivers gain such information by basically eavesdropping on their conversations.They reported that the passive-social-medium approach more effectively decreased the workload than the direct interaction approach between the drivers and the robots.
Non-verbal expression design
Non-verbal expression design in interaction between robots is another active research topic in studies that focus on the passive-social-medium approach, particularly gestures in conversational interaction.This category includes studies that focused on the effects of nonverbal expressions represented by multiple robots, e.g. the robot-number effects in similar motions and non-verbal interaction between robots.
For example, Mizumaru et al. [41] investigated the perception of emotional relationships through body expressions between multiple social robots using video stimuli.They applied different combinations of four characteristic body emotion expressions (sadness, fear, pride, and happiness) and described how the emotional movement of each robot's body influenced the relationships between the two robots and interpreted them using the valence-arousal model.Wicke et al. [42] developed a storytelling system using multiple social robots by focusing on pantomime and naturalistic gestures to enhance the storytelling contents.They conducted a web-based survey with videos, i.e. a passive-social-medium approach, and evaluated their developed system using spatial movements in embodied performances.
Some researchers focused on the physical interaction between robots, e.g.touching behaviours.Hirayama et al. [43] investigated the effects of motion parameters when a robot touches another robot.They prepared multiple videos with different touch behaviours using multiple social robots in a web-based survey, identified a speed boundary between patting and slapping behaviours, and described the appropriate speed parameters to express positive/negative relationships between robots.Okada et al. [44] experimentally investigated the perceived impressions of the conversational content of the effects of touching and whispering behaviours between robots and described the effectiveness of both between robots in information-providing tasks.
A few studies focused on the effects of non-verbal behaviours under specific situations, e.g.gaming, guiding, and apologising.LC et al. [45] focused on nonverbal emotional expressions when multiple arm robots played a game.They developed a multiple robot-control system and evaluated how a robot's behaviour changed the perceptions of emotional impressions in game settings by video stimuli, concluding that such knowledge of arm-type robots is useful for the behaviour design for non-humanoid robots.Kondaxakis et al. [46] proposed appropriate pointing gestures to align symbols in robotrobot interaction.Their main aim is not evaluations from human observers but achieving non-verbal interaction between robots for designing and automating a passive-social medium with multiple social robots.They conducted simulation experiments to evaluate the effectiveness of their system and demonstrated with a heterogeneous two-robot system the practical viability of this approach.Okada et al. [47] investigated the effects of the number of robots in apology settings.They prepared video stimuli where a robot apologised for its mistake in a cafeteria setting and investigated such actions with an additional robot.They conducted a web-based survey using videos and concluded that apologies from multiple robots are more acceptable than from a single robot.
Field trials for customer services and analysis
Several researchers employed an interactive-social-medium approach for field trials where multiple robots provided services to customers in real environments.This category includes studies that focused on such trials in real environments.Barbareschi et al. [48] reported a parallel tele-operation system that enabled disabled workers to control multiple robots in a cafeteria, where an operator controlled multiple robots that provided customer services.Moreover, since these controlled robots interacted with another robot managed by a different operator, they functioned as an interactive-social medium for customers.Using multiple social robots controlled by an operator at a bakery, Song et al. [49] also conducted recommendation services based on collaboration with multiple robots and showed the effectiveness of collaborating between robots.Iwamoto et al. [50] investigated the effectiveness of a playful recommendation system where robots stimulated pleasant feelings using multiple social robots with an interactive-social-medium approach by showing conversations between the robots as well as the effectiveness of self-recommendation robots (Figure 7(a)).From another perspective, Kamino et al. [51] conducted an ethnographic observational study in a cafeteria and reported such interaction between people and multiple robots as pet-type robots.
Other researchers used multiple social robots in real environments as a reception service.Nakanishi et al. [52] developed an interactive-social medium using multiple robots that engage in friendly interactions with hotel customers.They deployed their system in a hotel's public area and collected customers' impressions.The perceived impressions of the robots were influenced by customer gender and interaction durations.Aizawa et al. [53] focused on interaction design to encourage visitors to use guiding robots in public spaces and investigated the effects of different factors, such as the numbers of robots and their non-verbal behaviours.Amada et al. [54] investigated pseudo-crowd effects using different numbers of multiple social robots to attract the interest of passersby in advertising contexts (Figure 7(b)).They compared the effects of the number of robots and their verbal/nonverbal behaviours for attracting visitors who approached the robots and described the importance of speaking the guidance information and looking in various directions.
Supporting children
Similar to a passive-social medium, an interactive-social medium can support children.This category includes studies that focused on supporting children with an interactive-social medium.Compared to a passive-social medium, an advantage of the interactive-social-medium approach is its direct interaction with children, a situation that may stimulate their curiosity.Some researchers focused on the effectiveness of using multiple social robots to support ASD/ADHD children.Efthymiou et al. [55] developed an integrated robotic system (ChildBot) that can participate in and perform a wide range of educational and entertainment tasks.Their system enables multiple robots to actively interact with children, and their user experience study showed that children enjoyed playing with different robots.Soleiman et al. [56] also developed a robot system consisting of multiple social robots for stimulating social environments for children.Their case study showed how their system helps children with autism and reported that it improved emotion recognition skills through interaction with robots.Esfandbod et al. [57] investigated the benefits of using a multiple robot system toward educational interventions for enhancing children's engagement, attention, and retaining novel words.Lytridis et al. [58] also investigated the effectiveness of an interactive-social medium using multiple social robots to facilitate and enhance interventions in special education.They used multiple heterogeneous robots in ASD interventions and conducted an experiment with children, concluding that they demonstrated a high engagement level and an eagerness to participate in activities through interaction with an interactive-social medium.Amanatiadis et al. [59] investigated the effectiveness of interaction between an interactive-social medium and multiple children with autism, i.e. group interaction between robot and children.They investigated whether children played with other children to assess the benefits of a more naturalistic and interactive type of therapy.They concluded that interaction with multiple social robots indicates positive effects in participants' communication and interaction skills, joint attention, and cognitive flexibility.
Other researchers focused on supporting typically developing children.Tamura et al. [60] created an interactive storytelling system for children using multiple social robots.They assigned different roles to a robot, a reader, and a listener to attract children's interest by conversations between social robots and reported that storytelling with multiple social robots attracted more children than just one robot.Alemi et al. [61] also described a case study's results using multiple social robots controlled by an operator.Although their study did not report any advantages of using multiple social robots, they did describe their effectiveness in the context of education settings.
Collaborative behaviour design
Because an interactive-social medium involves people linked through robot-robot interaction, the behaviour design of multiple social robots and such system designs is becoming more complex.This category includes studies that focused on how to design collaborative behaviour between robots through both system developments and design workshops.Tan et al. [62] created a system that collaborates with two social robots (reception and mobile) to investigate the effects of different robotrobot interaction.They compared different communication strategies between robots and suggested the possibility of instilling socialness to improve the likability of a functional robot by having a social robot interact with it.They also conducted three design workshops to gather ideas on how multiple robots can work together to provide services and offered several guidelines and open questions as an outcome of the workshops [63].Moreover, they investigated different strategies concerning how a mobile robot might join an existing multi-modal interaction between a person and a stationary robot and reported that an improper strategy, i.e.where a mobile robot is standing too far away from a person and a stationary robot, repositioned the robot self to decrease the distances between interactants [64].Correia et al. [65] developed a platform for playing a digital game that involves a social dilemma between a mixed team of humans and robots.They did not conduct any evaluations of their system, although their proposed architecture for playing games with multiple social robots might provide useful knowledge for researchers.Other studies focused on the effectiveness of the number of robots in specific situations, such as providing social rewards [66] and mealtime conversations [67], and both reported that two robots are better than just one.
Relationships between robots and people
Researchers have also used an interactive-social-medium approach to express more complex relationships between robots and people and delve into how the relationship of robot-robot-human interactions influenced the perceived impressions.This category includes studies that focused on the relationships between robots and people in interactions.Erel et al. [68] investigated how a robot-robot-human interaction, i.e. an interactive-social medium, can lead to ostracism using non-humanoid social robots with different attitudes toward the participants.They also investigated how the social experiences of exclusion or inclusion with two non-humanoid robots shaped participants' interactions with others [69].They concluded that such experiences may produce carryover effects that extend beyond the interaction with robots, impacting interaction with others.Söderlund [70] investigated how robot-robot interaction provides impressions of warmth toward interlocutors.They conducted an experiment with Wizard of OZ settings to control two different robots, where a service robot displayed warmth toward another service robot during interactions with participants.Their experiment results showed that the service robot's high level of warmth boosted the participants' overall evaluations.
Verbal expression design
Similar to studies with a passive-social-medium approach, researchers have investigated the effects of verbal expression design between robots with the interactive-socialmedium approach.Iio et al. [71] proposed an approach that conceals incoherence using a double-meaning agreement among multiple robots and reported that the two robots using it produced better feelings of being understood than those who talked with one robot.They also focused on the effectiveness of such conversational robots against speech recognition failure in a conversation with seniors by comparing the number of robots [72] (Figure 8(a)).Arimoto et al. [73] also investigated effective conversational patterns for concealing incoherent responses using multiple social robots to investigate the effects of the number of robots.
Samson et al. [74] investigated the effectiveness of different two-party conversational voice guidance in driving situations and described how such conversations allowed drivers to better reflect on their choices after finishing driving tasks.Goto et al. [75] assigned different roles to multiple social robots and investigated how humans changed their decisions and behaviours in a moral dilemma environment through interaction with multiple robots that played different roles.Velentza et al. [76] also investigated the combination effects of different roles between two museum guide robots and reported that people remember more information when they are guided by two cheerful robots than by two serious ones.
Nishio et al. [77] developed an effective information medium using two android robots and compared the effectiveness of a passive-social-medium condition with an interactive-social medium (they called it 'semipassive,' although in this condition the robots interacted with the participants) and argued that participants recalled more content from the conversations and felt more empathy for the robots (Figure 8(b)).
Non-verbal expression design
Non-verbal behaviour design is also essential for natural interaction between a passive-social medium and interacting people.Golcic et al. [78] focused on social movements for different types of robots.They gathered social movement data using depth sensors and applied these movements to robot systems to investigate how such movements increase the interactivity of robots.They also used their system with humanoid robots and reported that participants had positive reactions to their interactions with them.Fraune et al. [79] investigated how robot behaviour toward humans and other robots affected interactions using minimally social robots.They conducted a video-based survey and compared the cultural differences between Japan and the U.S. as well as a laboratory experiment where American participants did simple tasks with two robots that behaved differently toward them.The robot behaviour toward social robots instead of functional robots increased the anthropomorphism of the robots.
Some researchers focused on behaviour design during conversations between robots and people, such as gazing behaviour.Eshed et al. [80] used an interactive-social medium to deepen understanding of the psychological determinants of behaviour in a novel social interaction.In this study, the participants non-verbally interacted with three social robots, and the work analyzed how the gaze behaviours of the former changed during interactions.Based on their gaze behaviour analysis, they concluded that information-gathering behaviour is initially predicted by psychological inflexibility and subsequently by curiosity toward the interaction's conclusion.Oertel et al. [81] investigated effective listening behaviours for multiple social robots that participate in multi-party interactions as an interactive-social medium.They analyzed listener behaviour in human-human, multi-party interaction and implemented an attentive listening system that generates multi-modal listening behaviour for social robots.They experimentally showed the advantages of their system in multi-party interactions among multiple humans and robots.
Persuasive robotics
One unique approach for using an interactive-social medium is for persuasive interaction, such as peer pressure and conformity; this category includes studies related to this research topic.Shiomi et al. [82] investigated the effects of peer pressure from multiple social robots.They conducted experiments with two, four, and six robots to investigate how people's behaviour changed due to social pressure from the robots and reported that six robots had stronger persuasive powers than the other amounts of them.Salomons et al.
[83] also investigated the effects of peer pressure from multiple social robots and how trust affects conformity.They experimented with three social robots and concluded that groups of robots led people to conform when they trusted the robots; losing trust caused such conforming to stop.Hashemian et al. [84] also investigated the persuasive effects of an interactive-social medium with two social robots.They compared two types of persuasive strategies based on social power, reward, and expertise in a situation where two robots attempted to persuade a user to make a concrete choice.Both were similarly persuasive, although the perceived competence and warmth were different between the strategies.
Advantages and disadvantages of passive-and interactive-social medium
In this subsection, I discuss the advantages and disadvantage of the passive-and interactive-social media in the development of multiple robot systems.The typical advantage of a passive-social medium is the simplicity of the behaviour designs between robots, which do not consider direct conversational interaction with observers.For example, robots may need to change such non-verbal behaviours as gaze direction toward visitors during conversational interaction between robots, although they do not need to consider changing conversational flows due to interruptions from observers.This advantage simplifies developing robot content and lowers the burden on the sensor system to capture people's information.Ironically, this advantage can be a disadvantage in a passivesocial medium, because it cannot conduct deeper interaction with observers due to such limited interaction capabilities as sensing information and fixed conversational interaction.Overall, the passive-social medium is suitable for developing robot-based content that needs no consideration of interaction with others, such as exhibitions, commercial, and one-way information-providing services.
The typical advantage of an interactive-social medium is rich interaction styles with people, allowing interruptions in robot-robot interaction.For example, when two robots discuss specific content, visitors can interrupt with questions.Moreover, by using multiple robots in conversations, robots can conceal incoherence and provide a better atmosphere than a single robot [71][72][73].However, this advantage is directly related to a disadvantage of the interactive-social medium: the need for a complex system to allow interruptions of multiple robot conversations.Unlike a conversation with a single robot, managing the content of multiple robot conversations for interruption is more complex and complicates avoiding the corruption of interaction with people.For this purpose, a rich sensing system is also needed.Overall, the interactivesocial medium is suitable for developing robot-based content that provides rich interaction experiences for people through two-way information-providing tasks.
Note that the effect of the number of robots is a common discussion topic in both passive-and interactivesocial media.Past studies have already shown the effectiveness of using multiple robots in various contexts, such as effective information-providing, attracting people's attention, and better attitudes of robots.[13,22,54].Therefore, although using multiple robots increases the costs and burdens for preparing such content as verbal/non-verbal behaviour designs of robots, this approach offers sufficient merit for developing social robot services.
Number as a non-verbal modality
The passive/interactive-social-medium approach can be regarded as a way for robots to use their numbers as a new non-verbal modality in social interactions with people.Robotics researchers have already used several non-verbal modalities for social robots, including facial expressions, whole-body gestures, voice characteristics, approaching trajectories, etc.However, the amount of information expressed by such modalities is limited by physical and time constraints.Using more robots will probably increase the amount of information that a robot system can represent as a whole.In other words, using multiple robots avoids a limitation on the amount of nonverbal information per unit time expressed by a single robot.
Related to this perspective, investigating robot numbers complicates behaviour designs for them.For example, robotics researchers need to design timing and consistent behaviour among facial expression, voice characteristics, and gestures for creating multi-modal behaviours like emotional expressions.Inappropriate behaviours and interaction designs among multiple robots may foment negative impressions from the interacting people.Furthermore, since the appropriate modality for conveying information and its design will differ by particular tasks, the appropriate number of robots must also be changed depending on the situations.Although most of the papers examined in this study compared one and two robots, comparisons with a larger number of robots will lead to appropriate interaction designs for multiple social robots.
Challenges and direction for future work
Due to the decreasing impact of COVID-19, studies with interactive-social media are once again increasing.Because robotics companies have been employing multiple robots to provide services in real environments, these robots need to interact and collaborate in social contexts, e.g.greetings between robots with (non-)verbal behaviours.Although such behaviours are unnecessary for communication between robots, manifesting social relationships between robots conveys positive feelings to the people around them.
Therefore, an interaction design based on a passivesocial-medium approach among multiple social robots is essential for achieving a socially acceptable existence while multiple social robots work together in real environments.
The interactive-social-medium approach is also crucial for raising the quality of the services provided by multiple social robots.When they work together in such actual environments as cafeterias, not every robot will always be as fully operational as humans.In such a situation, robots without specific tasks can help another robot, not only with physical tasks but also with such social tasks as joining conversations, providing additional information, showing hospitality, etc.Based on these contexts, incorporating a passive/interactive-social-medium approach for the behaviour design among multiple robots is an essential future work for robotics companies that use multiple robots in their services.
One future work from a research perspective is achieving multiple social touch interactions in an interactivesocial-medium approach.Although many kinds of conversational interactions have been investigated with nonverbal behaviour designs, such as touch interaction, which is an essential modality in human-human interaction, such approaches remain understudied in the context of an interactive-social medium.Past studies have shown that the number of robots influences changes in people's behaviours, not the amount of information in verbal modalities.Similar to this perspective, investigating whether the amount of information in a touch modality (e.g.comparing two touches from a single robot and a single touch from two robots) is effective for behaviour change is an interesting future work from an interactivesocial-medium approach.
Conclusion
This study surveyed the recent research activities related to interaction designs for multiple social robots under social contexts in the field of human-robot interaction.Robotics researchers simultaneously use multiple social robots based on passive/interactive-socialmedium approaches for rich interaction with people.Researchers have employed both approaches to investigate how multiple social robots influence people's behaviours and impressions through both direct and indirect interactions between groups of social robots and people.Using multiple social robots is a promising approach to extend the capabilities of social interaction for such robots by expressing the social relationships between them and enabling them to use a new non-verbal modality of numbers.
Figure 1 .
Figure 1.Concept of passive/interactive-social medium based on a previous work [13].
Figure 2 .
Figure 2. Flow diagram used in literature review in this study.
Figure 3 .
Figure 3. Number of studies with passive/interactive-social-medium approaches.
Figure 4 .
Figure 4. Characteristics of studies: robot control and environments.
Figure 5 .
Figure 5. Number of robots in surveyed studies.
Table A2 .
Summary of studies with interactive-social-medium approaches. | 8,527.4 | 2023-12-30T00:00:00.000 | [
"Computer Science",
"Sociology",
"Engineering"
] |
The lifespan estimates of classical solutions of one dimensional semilinear wave equations with characteristic weights
In this paper, we study the lifespan estimates of classical solutions for semilinear wave equations with characteristic weights and compactly supported data in one space dimension. The results include those for weights by time-variable, but exclude those for weights by space-variable in some cases. We have interactions of two characteristic directions.
Introduction
In this paper, we focus on the study of a model equation for the purpose of extending the general theory of nonlinear wave equations. Before stating our target, we first overview the general theory for nonlinear wave equations in one space dimension. Consider the following Cauchy problem: where f, g ∈ C ∞ 0 (R) and ε > 0 is a small parameter. Let λ = (λ; (λ i ), i = 0, 1; (λ ij ), i, j, = 0, 1, i + j ≥ 1). Assume that H = H( λ) is a sufficiently smooth function with in a neighborhood of λ = 0, where α ∈ N. Let us define the lifespan T (ε) as the maximal existence time of the classical solution of (1.1) with arbitrary fixed non-zero data. The general theory for the problem (1.1) is to express the lower bound of the lifespan according to α and the initial data. Li, Yu and Zhou [9] have constructed the general theory for this problem: It should be noted that Morisawa, Sasaki and Takamura [8] point out that there is a possibility to improve the above general theory by studying H = |u t | p + |u| q so-called "combined effect" nonlinearity. For the detail, see the end of Section 2 in [8].
We set H = |u| p in (1.1) which is a model to ensure the optimality of the general theory. Note that Kato [5] showed the blow-up result for any p > 1. Zhou [13] has obtained the following lifespan estimates for p > 1: Here, the definition of T (ε) ∼ A(C, ε) is given as follows: there exist positive constants C 1 and C 2 which are independent of ε satisfying A(C 1 , ε) ≤ T (ε) ≤ A(C 2 , ε). The differences between the lifespan estimates of (1.2) come from Huygens' principle which holds if the total integral of the initial speed g vanishes. See Lemma 2.1 below. Our motivation is to extend the general theory to the non-autonomous non-linear terms: H = H(x, t, u, u t , u x , u xx , u xt ).
In order to look for the assumptions on x and t in H, we shall investigate the following model equations in this paper: where p > 1 and F ∈ C 1 (R × (0, ∞)). Our purpose is to get the lifespan estimates for the problem (1.3) with the characteristic weights: where x := (1 + x 2 ) 1/2 for x ∈ R and a, b ∈ R. The meaning of the "characteristic" is originally used as t + |x| and t − |x|. We have changed it to (1.4) by using · , because we need the regularity to get the classical solutions and to avoid the singularity when the power of the exponent is negative. However, this modification is not essential because of Lemma 2.2 below. Let us denote by T (ε) the maximal existence time of the classical solution of (1.3) with arbitrary fixed non-zero data. Then, we have the following main results: T (ε) = ∞ for a + b > 0 and a > 0, (1.5) for a + b = 0 and a > 0, or a = 0 and b > 0, exp(ε −(p−1)/2 ) for a = b = 0, ε −(p−1)/(−a) for a < 0 and b > 0, φ −1 1 (ε −(p−1) ) for a < 0 and b = 0, ε −(p−1)/(−a−b) for a + b < 0 and b < 0 (1.6) if R g(x)dx = 0, where φ −1 1 is an inverse function of φ 1 defined by φ 1 (s) = s −a log(2 + s), and ) for a = 0 and b > 0, exp(ε −p(p−1) ) for a + b = 0 and a > 0, exp(ε −p(p−1)/(p+1) ) for a = b = 0, ε −(p−1)/(−a) for a < 0 and b > 0, ψ −1 1 (ε −p(p−1) ) for a < 0 and b = 0, ε −p(p−1)/(−pa−b) for a < 0 and b < 0, ψ −1 2 (ε −p(p−1) ) for a = 0 and b < 0, ε −p(p−1)/(−a−b) for a + b < 0 and a > 0 where ψ −1 1 and ψ −1 2 are inverse functions of ψ 1 and ψ 2 respectively defined by ψ 1 (s) = s −pa log(2 + s) and ψ 2 (s) = s −b log p−1 (2 + s). (1.9) We explain the background to considering the form of the weight function F in (1.4). Firstly, the pointwise estimates of the wave equations have a characteristic factor such as (1.4). A natural question arises how the lifespan estimates change as compared with those for non-weighted case, (1.2). Secondly, our equations include some damped wave equations which were treated by many previous works. For example, let us consider the following Cauchy problem for the nonlinear damped wave equations: (1.10) Then, if we set u(x, t) = (1 + t)v(x, t) (Liouville transform), we have u(x, 0) = εf (x), u t (x, 0) = ε{f (x) + g(x)} for x ∈ R. (1.11) When p > 3, D'Abbicco [2] showed that the energy solution of (1.10) exists globally in time. On the other hand, Wakasugi [12] has obtained the blowup results of the problem (1.10) for 1 < p ≤ 3. We note that they treated more general damping terms µv t /(1 + t) (µ > 0) in (1.10). For the lifespan estimates in (1.11), Wakasa [10] obtained T (ε) ∼ ε −(p−1)/(3−p) for 1 < p < 3, exp(ε −(p−1) ) for p = 3 if R {f (x) + g(x)}dx = 0 (1.12) and Kato, Takamura and Wakasa [6] obtained is a positive number satisfying ε 2 b log(1+b) = 1. Here, we note that the critical exponent 3 which is a threshold between the global-in-time existence and the blow-up in finite time is the so-called Fujita exponent in one space dimension. Remarkably, our lifespan estimates (1.8) with a = p−2 and b = −1 coinside with (1.12) and (1.13). This is because, 1 + t is equivalent to t + x by the finite propagation speeds. See (2.1) and Lemma 2.2 below. We next mention the results to the case where F in (1.3) is a spatial weight only as F = x −(1+b) with b ∈ R. Kitamura, Morisawa and Takamura [7] have obtained the lifespan estimates, and where φ −1 and ψ −1 are inverse functions defined by φ(s) = s log(2 + s) and ψ(s) = s log p (2 + s), respectively. The differences between F = (1 + t) −(p−1) and F = x −(1+b) in the lifespan estimates come from the decay property of the weights near the origin. Indeed, for the time weights, there is a possibility that we get the global existence for p > 3 due to the decay of the nonlinear term even if |x| is small. But there is no such a situation for the spatial weights. Finally, we remark the setting of the weight function F in our problem in (1.3). If F = t + x −(1+a) , we cannot expect to obtain the global existence result, because we have no decay property along t + x = 0 when x < 0. On the other hand, when x > 0, we have the decay property for t + x −(1+a) near the origin. For this reason, we get the lifespan estimates similar to the time weights. Therefore, it is necessary to consider the weighted functions t + |x| −(1+a) and t − |x| −(1+b) separately. In addition, if F ∈ C 1 (R × (0, ∞)) does not hold, we have no chance to obtain the classical solution of (1.3) even locally in time.
Before stating our main results, we assume some conditions for the initial data.
Here, we address our lifespan estimates in the (a, b)-plane for the convenience of explanation. Figure 1 is related to Theorems 1.1, 1.2 and 1.4. Figure 2 is related to Theorems 1.1, 1.3 and 1.5. Remark 1.1. We compare all the lifespan estimates when a = −1 in our results as for F = t − x −(1+b) with those for F = x −(1+b) by Kitamura, Morisawa and Takamura [7]. If the total integral of g does not vanish, they coincide with each other. However, if the total integral of g vanishes, it does not hold. The reason for this situation can be described as follows. The triangle domain which has a vertex (x, t) in the following figure is the domain of the integral of the Duhamel term, (2.3) below.
When b > −1, we cannot derive any decay of the nonlinear term from the spatial weights y −(b+1) along y = 0. On the other hand, we cannot derive any decay of the nonlinear term from characteristic weights s − |y| −(b+1) along s − |y| = 0. When b < −1, for the weights y −(b+1) , the growth of the solution appears on s − |y| = 0. On the other hands, for the weights s − |y| −(b+1) , the growth of the solution appears on y = 0. Keeping this situation in mind, we shall step into our purpose.
Set a = −1 as F = t − |x| −(1+b) . If we assume (1.17), then the Huygens' principle (Lemma 2.1) does not hold. Hence, we have that u ∼ O(ε) in its support. See the construction of the solution in the function space Y 1 below at the end of section 3. This fact helps us understand that the lifespan estimates in Theorem 1.2 and Theorem 1.4 are the same as that of (1.14).
In contrast, if we assume (1.18) which implies that Huygens' principle holds, we have that u ∼ O(ε p ) in the interior domain {t + |x| ≥ R and t − |x| ≥ R} while we have that u ∼ O(ε) in the exterior domain {t + |x| ≥ R and |t − |x|| ≤ R}. See the construction of the solution in the function space Y 2 below at the end of section 4. For F = x −(b+1) (b > −1), the solution u does not decay along y = 0 which is located in the interior domain. On the other hands, for F = t − |x| −(b+1) , the solution u does not decay along s − |y| = 0 which is located in the exterior domain. This fact helps us understand that the lifespan estimates in Theorems 1.3 and 1.5 are smaller than those of (1.15) when b > −1. Conversely, in view of the considerations above, when b < −1, the lifespan estimates in Theorems 1.3 and 1.5 are larger than those of (1.15). Remark 1.2. We remark that the weight function F of t + x and t − x can cover a large class of power-type weights. For example, let us consider F = t + x 2 −(a+1) . When a > −1, we may have the same results as for F = t −(a+1) because x 2 in F has a minor effect. On the other hands, when a < −1, we may have the same results as for F = x 2 −(1+a) because x 2 in F has a major effect. It is sufficient to consider the linear combinations of x and t in the weighted functions.
Let us consider F = t − C|x| −(b+1) , where C is a positive constant. If 0 < C < 1 and t − |x| > 0, then we have that so that the lifespan estimates may coincide with that for the time weights. On the other hands, if C > 1, the line s − C|y| = 0 in (s, y)-domain of the integral of the Duhamel term is located in the interior domain in Remark 1.1. Therefore, the lifespan estimates may coincide with that for the spatial weights.
This work was almost completed during the period when the first author was in the master course of Mathematical Institute of Tohoku University and the second author had the second affiliation of Research Alliance Center of Mathematical Sciences, Tohoku University. This paper is organized as follows. In the next section, we set the notation and prove some lemmas needed later. The proofs of Theorem 1.1 with (1.17) and Theorem 1.2 are established in Section 3. In Section 4, we prove Theorem 1.1 with (1.18) and Theorem 1.3. Finaliy, we prove Theorem 1.4 and Theorem 1.5 in Section 5. All the proofs in this paper are based on the point-wise estimates of solutions which are originally introduced by John [4].
Preliminaries
In this section, we set the notation and prove some useful lemmas. Assume that u ∈ C 2 (R × [0, T ]) is a solution to the Cauchy problem (1.3). Then the following finite propagation speeds holds: For the proof, see Appendix of John [3]. We define and is the solution to the Cauchy problem (1.3).
The following lemma is so-called Huygens' principle which is essential for the proof of main theorems.
For the proof, see Proposition 2.2 in [7]. and Proof. We only prove Others are trivial by taking the difference between the squares of right-hand side and left-hand side. It is easy to see that Then, we see ✷ Finally, we introduce the following domains to estimate the solutions: Note that the lifespan is determined by the point-wise estimates of the solution in D Int .
Proofs of Theorem 1.1 and Theorem 1.2
We define L ∞ -norm of u by The following a priori estimate plays a key role in the proofs of Theorem 1.1 and Theorem 1.2.
where E 1 (T ) is defined by Proof. We denote by C a positive constant which depends only on p, a, b, R and j may change from line to line. The definition of (2.3) gives us Then, the inequality (3.2) follows from Thus, we show (3.6) in the following. Without loss of generality, we may assume x ≥ 0 due to the symmetry of I(x, t) as I(x, t) = I(−x, t) holds by (3.5). Changing variables by in the integral of (3.5), we get Here we set We first state the following lemma which is useful to estimate the above integrals.
Lemma 3.2. Let q ∈ R and µ, ν ≥ 0. Then there exists a positive constant The proof is easy so that we shall omit it. and Proof. We show the inequality (3.9) only as (3.10) is trivial. The assumptions give us Case 1. a > 0 and a + b > 0, or a > 0 and a + b = 0, or a > 0 and a + b < 0.
Making use of (3.8) with q = a(< 0) and (3.10) to the α-integral, we obtain It follows from (3.8) with q = b and (3.10) that Summing up all the estimates, we get Next, we shall estimate I 12 in D Int . Noticing that β ≤ t − x ≤ t + x, the estimates of this integral are the same as the estimates for I 11 , which is replaced by α and β. So, we have Next, we shall estimate I 13 and I 14 in D Int . We note that both the βintegral in I 13 and α-integral in I 14 are bounded by C. Thus, we get Making use of (3.8) with q = a, (3.9) and (3.10) to the above integrals, we obtain It follows from T + R ≥ 1 and log(T + 3R) ≥ 1, (3.12) (3.11) and the definition of Next, we shall estimate I 21 and I 22 in D Ext ∪ D Ori . First, we investigate them in D Ext . Since |t − x| ≤ R holds for (x, t) ∈ D Ext , we get For t − x > 0, we have Making use of (3.12) for the estimates of I 22 , we get where we have used β ≥ −R. It follows from (3.9), (3.10) and (3.12) that Finally, we shall estimate I 21 and I 22 in D Ori . Noticing that |t − x| ≤ R and t + x ≤ R hold for (x, t) ∈ D Ori , we obtain Making use of (3.12), we get Summing up all the estimates, we obtain (3.6). Therefore the proof of (3.2) is established. ✷ Proofs of Theorem 1.1 and Theorem 1.2. Let us define a Banach space which is equipped with the norm (3.1). Define a sequence of functions {u n } n∈N by where L and u 0 are defined in (2.3) and (2.2) respectively.
We shall construct a solution of the integral equation (2.4) in a closed subspace Because it follows from (2.2) and the assumption for (f, g) that |u 0 (x, t)| ≤ εC 0 . Then, analogously to the proof of Theorem 1.2 in [11] with M : 4 Proofs of Theorem 1.1 and Theorem 1.3 We define the following weight function: Denote a weighted L ∞ -norm of U by Then we have the following a priori estimates.
where D(T ) is defined by Suppose that the assumptions of Theorem 1.1 and Theorem if a + b < 0 and a > 0.
Proof of Lemma 4.1. We denote by C a positive constant which depends only on p, a, b, R and j may change from line to line. Making use of (2.5), we have Then, the inequality (4.3) follows from Thus, we show the above inequality in the following.
Without loss of generality, we may assume x ≥ 0 due to the symmetry of J(x, t) in x. Changing variables by (3.7), we obtain Here we set First, we shall estimate J 11 in D Int . Noticing that by α ≤ t + x, (3.9) and (3.10), we obtain Next, we shall estimates J 12 in D Int . Because of β ≤ t − x, we get where we have used (3.9) and (3.10). Hence we obtain Making use of (3.13), we obtain We next estimate J 21 and J 22 in D Ext . Because of |t − x| ≤ R, we get Thus, the desired estimates for J 21 and J 22 are established. Finally, we shall estimate J 21 and J 22 in D Ori . Noticing that |t − x| ≤ R and t + x ≤ R hold for (x, t) ∈ D Ori , we obtain for t − x > 0. It follows from w(|x|, t) ≥ 1 and (3.12) that Therefore, we get (4.3). ✷ Proof of Lemma 4.2. We denote by C a positive constant which depends only on p, a, b, R and j may change from line to line. The definition of (2.3) gives us Then, the inequaltiy (4.4) follows from (4.5) Thus, we show (4.5) in the following. Without loss of generality, we may assume x ≥ 0 due to the symmetry of J (x, t) in x. Changing variables by (3.7), we get Here we set Case 1. a > 0 and a + b > 0, or a > 0 and a + b = 0, or a > 0 and a + b < 0.
Because of w(|x|, t) = 1, the desired estimates are established by the same manner as that of Case 1 of Lemma 3.1. We shall omit the detalis. We shall estimate J 11 in D Int . The definition of (4.1) yields Let a = 0 and b > 0, or a = 0 and b = 0. Making use of (3.8), we get It follows from (3.9) and (4.6) that For the case of a = 0 and b < 0, we employ the same argument as in the proof of Lemma 3.1. By virtue of (3.8) and (3.10), we get Noticing that log p (t + x + 3R) ≤ log p−1 (T + 3R)w −1 (x, t) by (3.9), we obtain Therefore, we get Next, we shall estimate J 12 . It follows from (4.1) that The estimates of this integral are the same as those of J 11 , in which α and β are replaced with each other. Hence, we get Next, we shall estimate J 13 and J 14 . The definition of (4.1) and (3.9) gives us It follows from (3.13), (3.12) and Next, we shall estimate J 21 and J 22 in D Ext ∪ D Ori . Since |t − x| ≤ R holds for (x, t) ∈ D Ext , we have Thus, the estimates of above integrals are the same as those of J 13 and J 14 . Moreover, since (x, t) ∈ D Ori is bounded, we have Making use of log(t + |x| + 3R) ≥ 1 and (3.12), we get the desired estimates. Hence, we obtain Case 3. a < 0 and b > 0, or a < 0 and b = 0, or a < 0 and b < 0.
We shall show the estimate for J 11 in D Int . The definition of w in (4.1) and the trivial inequality (3.10) yield Making use of (3.8) for the above integral, we get Thus, (4.7) implies Next, we shall estimate J 12 . It follows from (4.1) that The estimates of this integral are the same as those of J 11 , in which α and β are replaced with each other. Hence, we get Next, we shall estimate J 13 and J 14 . It follows from (4.1) and (3.9) that Hence, because of (3.13) and (3.12), we get Next, we shall estimate J 21 and J 22 in D Ext ∪ D Ori . Since |t − x| ≤ R holds for (x, t) ∈ D Ext , we have for t − x > 0 in D Ext . Thus, the estimates of above integrals are the same as those of J 13 and J 14 . Moreover, since (x, t) ∈ D Ori is bounded, we have Making use of t + |x| + 3R ≥ 1 and (3.12), we get the desired estimates. Hence, we obtain Summing up all the estimates, we obtain (4.5). Therefore the proof of (4.4) is now established. ✷ Proofs of Theorem 1.1 and Theorem 1.3. We shall employ the same argument as in the proof of Theorem 2.2 of [7]. We consider an integral equation: where L and u 0 are defined in (2.3) and (2.2) respectively. Let us define a Banach space which is equipped with a norm (4.2). We shall construct a solution of the integral equation (4.8) in a closed subspace Analogously to the proof of Theorem 1.2 in [7] with M := C 2 , we can see that U n ∈ Y 2 (n ∈ N) provided the inequality holds, where C 3 , E 2 (T ) are defined in (4.4). Moreover, {U n } is a Cauchy sequence in Y 2 provided the inequality holds, where D(T ) is the one in (4.3). When a > 0, we can easily find ε 1 in Theorem 1.1, or c and ε 3 in Theorem 1.3, because of D(T ) = 1. When a = 0 and b > 0, the conditions (4.9) and (4.10) follow from where we set When a = b = 0, since p(p − 1)/(p + 1) < p − 1, the conditions (4.9) and (4.10) follow from where we set When a = 0 and b < 0, the estimate in Theorem 1.3 is derived by Here ε 3 has to satisfy When a < 0 and b > 0, the conditions (4.9) and (4.10) follow from where we set When a < 0 and b = 0, the estimate in Theorem 1.3 is derived by Here ε 3 has to satisfy Finally, when a < 0 and b < 0, since p(p − 1)/(−pa − b) ≤ (p − 1)/(−a), the conditions (4.9) and (4.10) follow from where we set In this section, we shall investigate the upper bounds of the lifespan. We note that they are determined by point-wise estimates of the solution in the interior domain, D Int . In fact, it follows from (1.16) and (2.2) that In this section, we assume that Making use of Lemma 2.2 and introducing the characteristic coordinate by (3.7), we have that Employing this integral inequality, we shall estimate the lifespan from above. We also use the following lemma. Then, Proof. The definition of M n yields log M n+1 = log C − n log(ηµ) + p log M n .
Proof of Theorem 1.4
Let u = u(x, t) ∈ C 2 (R × [0, T )) be a solution of (1.3). It follows from (1.19), (5.2) and (5.5) that for (x, t) ∈ D, where Case 1. a > 0 and a + b < 0, or a ≤ 0 and b < 0. First, we consider only the case where a > 0 and a + b < 0. Assume that an estimate holds, where a n ≥ 0 and M n > 0. The sequences {a n } and {M n } are defined later. Then it follows from (5.9) and (5.10) that and b < 0, we get Therefore, if {a n } is defined by a n+1 = pa n + 1, a 1 = 0, (5.13) then (5.10) holds for all n ∈ N as far as M n satisfies In view of (5.8), we note that (5.10) holds for n = 1 with Therefore, it follows from (5.13) that According to a n < p n−1 /(p − 1), (5.6) and (5.15), we define {M n } by Hence, making use of Lemma 5.1, we reach to Therefore, we obtain If there exists a point (x 0 , t 0 ) ∈ D such that we have u(x 0 , t 0 ) = ∞ by letting n → ∞, so that T < t 0 . Let us set 2x 0 = t 0 and 6R < t 0 . Then because the inequality holds for 6R < t 0 . The condition (5.17) follows from The proof for a > 0 and a + b < 0 is now completed. Next, we consider the case where a ≤ 0 and b < 0. Assume that holds. Then, similarly to the above computations with (5.9), we have Making use of (5.11), we get This estimate is almost same as (5.12). Therefore, the same lifespan estimate is obtained except for the included constant independent of ε. ✷ Case 2. a < 0 and b = 0. Assume that an estimate holds, where a n is the one in (5.16). In this case, M n is defined by (5.6) with C = C 2 := C 0 (p − 1) 2 (1 − a) −1 > 0 and η = µ = p. We note that the same argument as in Case 1 can be applicable to also this case. It follows from (5.8) and (5.18) that For the α-integral, we obtain Replacing β by (t − x) in the quantity above, we get Therefore, we obtain and S p is the one in (5.7). If there exists (x 0 , t 0 ) ∈ D such that K 2 (x 0 , t 0 ) > 0, then we have u(x 0 , t 0 ) = ∞ as before. Set 2x 0 = t 0 and 2 + t 0 ≥ 16R 2 . Then for 0 < ε ≤ ε 4 , where ε 4 is defined by The proof for a < 0 and b = 0 is now completed. ✷ Case 3. a < 0 and b > 0.
Hence K 3 (x 0 , t 0 ) > 0 follows from This inequality provides us the desired estimate as before. The proof for a < 0 and b > 0 is now completed. ✷ Case 4. a = 0 and b > 0.
Assume that an estimate holds, where a n is the one in (5.16). In this case M n is defined by (5.6) with C = C 4 := C 0 (p − 1) 2 > 0 and η = µ = p. Making use of (5.8) and (5.21), we have .
For the α-integral, we have that t+x β log pan 1 + α 1 + β which gives us Therefore, we have and S p is the one in (5.7). As in Case 3, let us fix (x 0 , t 0 ) ∈ D such that t 0 − x 0 = R + 1 and t 0 > (R + 2) 2 .
Then we have Hence, K 4 (x 0 , t 0 ) > 0 follows from This inequality provides us with the desired estimate as before. The proof for a < 0 and b > 0 is now completed. ✷ Case 5. a = 0 and b = 0.
Then we have which is the desired estimate as before. The proof for a = 0 and b = 0 is now completed. ✷ Case 6. a > 0 and a + b = 0.
In this case, we employ so-called "slicing method" which was introduced by Agemi, Kurokawa and Takamura [1]. Let us set We note that D n+1 ⊂ D n for all n ∈ N. Assume that an estimate u(x, t) ≥ M n log 1 + t − x 1 + l n R an in D n (5.24) holds, where a n is the one in (5.16). The sequence {M n } with M n > 0 is defined later. Then (5.8) and (5.24) imply that Let (x, t) ∈ D n+1 . Then, we get Note that l n R < λ n (α) < α for all n ∈ N holds, where λ n (α) := (1 + α)(1 + l n R) 1 + l n+1 R − 1.
Then we have that Due to 1 < l n < 2 for all n ∈ N and R > 1, the β-integral is estimated as follows: Therefore, we can employ the same argument as in Case 1 with a n in (5.16) and M n defined by M n+1 = C 6 2 −n p −n M p n , M 1 = C g ε, where C 6 := C 0 (p − 1)/2 · 3 1−b > 0. Here, we assume t − x > 2R = lim n→∞ l n R. Making use of Lemma 5.1 with C = C 6 , η = 2 and µ = p, we obtain and S p is the one in (5.7). As in Case 1, let us fix (x 0 , t 0 ) ∈ D ∞ such that t 0 = 2x 0 and t 0 > 4(1 + 2R) 2 .
Hence K 6 (x 0 , t 0 ) > 0 follows from which is the desired estimate as before. The proof for a > 0 and a + b = 0 is now completed. Therefore the proof of Theorem 1.4 finishes. ✷
Proof of Theorem 1.5
The proof is almost similar to the one of Theorem 1.4. Let u = u(x, t) ∈ C 2 (R × [0, T )) be a solution of (1.3). Since the assumption on the initial data in (1.20) yields It follows from (5.2), (5.3), (5.5) and for (x, t) ∈ D, where D and C 0 are defined in (5.1) and (5.4) respectively, and (5.28) For (5.28), we may assume that without loss of generality. Because, if not, we have to assume f (z) ≡ 0 in z ∈ (0, R) and to change the definition of D in which x > 0 is replaced with x < 0. For such a case, we obtain all the estimates below with −x instead of x. In fact, taking f (x + t) instead of f (x − t) in (5.25), we have, instead of (5.9), that Case 1. a > 0 and a + b < 0.
Since J ′ in (5.28) is estimated from below by it follows from (5.27) and (5.29) that Therefore, we can employ the same argument as Case 1 of Theorem 1.4 in which the constant C g ε in (5.9) is simply replaced with C f,1 ε p . ✷ Case 2. a = 0 and b < 0.
Since J ′ in (5.28) is estimated from below by it follows from (5.26) and (5.29) that We employ the "slicing method" again. Assume that an estimate holds, where a n ≥ 0, b n > 0 and M n > 0. Here, D n and l n are defined in (5.23 Let (x, t) ∈ D n+1 . Note that l n R < λ n < t − x for all n ∈ N holds, where Then we have It follows from Hence, (5.33) holds for all n ∈ N provided a n+1 = pa n + 1, a 1 = 0 b n+1 = pb n , b 1 = 1 and M n+1 ≤ C 7 2 −2n M p n , M 1 = C f,2 ε p , where C 7 := C 0 /72 > 0. Therefore, we define a n is the one in (5.16), b n = p n−1 and M n as above. Making use of Lemma 5.1 with C = C 7 and η = µ = 2, for (x, t) ∈ D ∞ , we obtain and S p is the one in (5.7). Let us fix (x 0 , t 0 ) ∈ D ∞ such that Then, we obtain , which is the desired estimate as before. This completes the proof for a = 0 and b < 0. ✷ Case 3. a < 0 and b < 0.
Then we obtain which is the desired estimate as before. This completes the proof for a < 0 and b < 0. ✷ Case 4. a < 0 and b = 0. Hence we define a n by the one in (5.16) and set b n = a n+1 . Moreover, M n is defined by M n+1 = C 9 p −2n M p n , M 1 = C f,4 ε p , where C 9 := C 0 (p − 1) 2 /(1 − a)p 2 > 0. Therefore, making use of Lemma 5.1 with C = C 9 and η = µ = p, we obtain, by (5.39), that Let us fix (x 0 , t 0 ) ∈ D such that t 0 = 2x 0 and t 0 ≥ 16R 2 .
Therefore, we obtain the desired estimate as before. This completes the proof for a < 0 and b > 0. ✷ Case 6. a = 0 and b > 0. | 8,341.2 | 2022-04-01T00:00:00.000 | [
"Mathematics"
] |
Refining short-range order parameters from the three-dimensional diffuse scattering in single-crystal electron diffraction data
This study compares, for the first time, short-range order parameters refined from the diffuse scattering in single-crystal X-ray and single-crystal electron diffraction data.
S2.1. Long-range order model
For the long-range order model, a cell with a size of 6 × 6 × 6 NbCoSb unit cells (cell parameter a = 5.89864(3) Å, space group F4 ̅ 3m (Zeier et al., 2017)) was created, and 1/6 of the Nb atoms were replaced by vacancies to form the B1 structure as defined in (Roth et al., 2020).Each Sb atom was moved by 0.148 Å towards its neighbouring vacancy, and each Co atom was moved by 0.128 Å away from its neighbouring vacancy (displacements refined from the Bragg reflections in the single-crystal X-ray diffraction data of the slowly cooled sample Nb0.81CoSb (SC-0.81))(Roth et al., 2021).The Sb atoms were moved along the cubic <100> directions, while the Co atoms were moved along the cubic <111> directions (Fig. 1(b)).The resulting B1 cell (Fig. S1) has cell parameter a = 35.3918(2)Å and space group P1.A structure with a size of 6×6×6 B1 cells was created in DISCUS.
In (Roth et al., 2021), the BD structure was used instead of the B1 structure.The BD structure is a combination of the B1 structure and the A2 structure.The BD, B1 and A2 structures are defined in (Roth et al., 2020).Because we noticed that the diffraction patterns calculated from the B1 structure agree better with the experimental diffraction patterns than the ones calculated from the BD structure, we used the B1 structure instead of the BD structure.
S2.2. Short-range order model
For the short-range order model, a starting structure with a size of 25 × 25 × 25 NbCoSb unit cells (cell parameter a = 5.89864(3) Å, space group F4 ̅ 3m (Zeier et al., 2017)) was created.1/6 of the Nb atoms were randomly selected and replaced by vacancies.Periodic boundary conditions were imposed to avoid edge effects.
A Monte Carlo simulation in DISCUS is used to minimize the energy E of the crystal until the target correlations between nearest neighbour vacancy pairs ( (1/2,1/2,0) ) and next-nearest neighbour vacancy pairs ( (1,0,0) ) are achieved.The energy E of the crystal is defined as (Roth et al., 2020): ) , vac
𝑖=1
(1) with vac the number of vacancies in the crystal.The summation in the first term is over all 12 nearest neighbour (NN) vacancy sites j of vacancy i, whereas the summation in the second term is over all six next-nearest neighbour (NNN) vacancy sites j' of vacancy i. = 1 if site j is occupied by a vacancy and = 0 if site j is occupied by a Nb atom.Similarly, ′ = 1 if site j' is occupied by a vacancy and ′ = 0 if site j' is occupied by a Nb atom. 1 is the energy assigned to a nearest neighbour vacancy pair, and 2 is the energy assigned to a next-nearest neighbour vacancy pair.
Each Monte Carlo step, two randomly selected Nb atoms/vacancies are switched positions.When the new configuration has a lower energy E, then it is always accepted.When the new configuration has a higher energy E, then it is only accepted when the transition probability P, given by is less than a random number η, chosen uniformly in the range [0,1].∆E is the energy difference between the new and the old configuration, T is the temperature, and k is Boltzmann's constant.The temperature T controls the proportion of accepted modifications that lead to a higher energy E. If T = 0, only changes that decrease the energy E will be accepted.The higher the temperature T, the more moves will be accepted that lead to a higher energy & Proffen, 2008).The number of Monte Carlo cycles was chosen equal to 1000 times the number of atoms within the crystal.
The short-range order model in DISCUS was calculated for kT = 0.001 (Equation 2).If T = 0, only changes that decrease the energy E of the crystal will be accepted.The higher the temperature T, the more moves will be accepted that lead to a higher energy E. Fig. S2 shows the diffuse scattering in the h0l plane calculated for different values of kT.Differences in the sharpness of the diffuse scattering can be explained by differences between the target and the achieved correlation coefficients.The diffuse scattering was calculated for a target correlation between nearest neighbour vacancies of (1/2,1/2,0) = −0.20 and a target correlation between next-nearest neighbour vacancies of (1,0,0) = −0.10.For kT = 0.001, the achieved correlations are (1/2,1/2,0) = −0.18 and (1,0,0) = −0.09,while for kT = 1, the achieved correlations are (1/2,1/2,0) = −0.16 and (1,0,0) = −0.07.The achieved correlations are thus lower for higher values of kT, which explains the differences in the calculated diffuse scattering.For each crystal, the diffuse scattering was also averaged over 50 lots with a size of 12 × 12 × 12 unit cells.The diffuse scattering was calculated for a target correlation between nearest neighbour vacancies of (1/2,1/2,0) = −0.20 and a target correlation between next-nearest neighbour vacancies of (1,0,0) = −0.10.
S2.3. Monte Carlo refinement
A Monte Carlo refinement in DISCUS was used to refine the target correlation between next-nearest neighbour vacancy pairs ( (1,0,0) ), the target distance between a vacancy i and a neighbouring Sb atom k ( ), and the target distance between a vacancy i and a neighbouring Co atom k' ( ′ ).Each refinement cycle, the short-range order parameters ( (1,0,0) , and ′ ) are adjusted, and the model crystal is recalculated.The diffuse scattering is calculated and compared with the observed diffuse scattering.This process is repeated until the best agreement between calculated and observed diffuse scattering intensities is obtained.
The differential evolutionary algorithm (Price et al., 2005) mimics the changes in a plant or animal population according to the Darwinian principle of natural evolution.The algorithm starts with a group of M members (parents).Each member represents a set of N short-range order parameters.
Next, the algorithm creates a new group of M members (children) by adjusting the short-range order parameters of their parents.The parents and the children with the lowest R-values (Equation 4) survive and will be the parents of the new generation (survival of the fittest).This procedure is repeated for a number of refinement cycles (generations) until the R-value converges to its minimum (Neder & Proffen, 2008).
The sum is over all measured reciprocal lattice points Qi, Iobs and Icalc are the observed and calculated diffuse scattering intensities.The weights were set to unity so that all data points i contribute equally to the summation.
S3. Dynamical refinement of the average crystal structure
S6. The 3D-∆PDF
The x0z plane of the experimental 3D-∆PDF in Fig. 5 is almost identical for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).A positive peak is found at the origin since the distance of an atom to itself is always zero.Strong negative features are visible at interatomic vectors (0.5,0,0.5) and (1,0,0 finding two nearest or two next-nearest neighbour vacancies is also lower in the real structure than in the average structure.Fig. S19 shows the x0z plane of the 3D-∆PDF for longer interatomic distances. Strong positive features are visible at interatomic vectors (1.5,0,1.5),(2,0,0) and (3,0,0), showing the preferred distances between the vacancies.Besides, the magnitudes of the 3D-∆PDF features decrease more quickly for the thermally quenched sample than for the slowly cooled sample, which means that the correlation length of the local Nb-vacancy order is longer for the slowly cooled sample than for the thermally quenched sample.
Figure S1
Figure S1 B1 structure, as defined by (Roth et al., 2020), showing the Nb-vacancy ordering in the long-range order model.Sb and Co atoms are omitted for clarity.
Figure S2
Figure S2 Diffuse scattering in the h0l plane calculated for a) kT = 0.001, b) kT = 0.5, c) kT = 1 and d) kT = 5.The diffuse scattering was averaged over ten crystals with a size of 25 × 25 × 25 unit cells.
Figure S3
Figure S3 Comparison of the h0l plane from single-crystal X-ray and single-crystal electron diffraction, both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).The top row shows the experimental diffuse scattering; the bottom row shows the diffuse scattering calculated in Scatty from the structure models calculated in DISCUS.The experimental single-crystal X-ray diffraction data were previously reported by (Roth et al., 2021).
Figure S4
Figure S4Comparison of the h0.5l plane from single-crystal X-ray and single-crystal electron diffraction, both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-
Figure S5
Figure S5 Comparison of the hhl plane from single-crystal X-ray and single-crystal electron diffraction, both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).The top row shows the experimental diffuse scattering; the bottom row shows the diffuse scattering calculated in Scatty from the structure models calculated in DISCUS.The experimental single-crystal X-ray diffraction data were previously reported by (Roth et al., 2021).Angular broadening of the Bragg reflections is mainly due to crystal mosaicity.
Figure
Figure S6 (a) X-ray and (b) electron atomic form factors of Co, Nb and Sb as a function of d*=2sin()/λ.With d the distance between the lattice planes, the scattering angle and λ the wavelength.
Figure S7
Figure S7 Symmetry with Laue class m3 ̅ m was applied to the three-dimensional diffuse scattering calculated in Scatty.The effect of the vacancy distribution on the h0l plane is shown in Fig. S8.The h0l plane of a perfectly ordered NbCoSb crystal without vacancies shows sharp Bragg reflections at integer hkl values [Fig.S8(a)].In Nb0.84CoSb, 1/6 of the Nb sites are occupied by vacancies.When the vacancy distribution is random and when there are no displacements of Sb and Co atoms, a broad diffuse background will be visible [Fig.S8(b)], which is called monotonic diffuse Laue scattering (Warren et al., 1951).The short-range Nb-vacancy order in Fig. S8(c) results in highly structured diffuse scattering between the Bragg reflections, whereas the long-range Nb-vacancy order in Fig. S8(d) results in sharp satellite reflections.
Figure S8
Figure S8 Structure models and their corresponding calculated single crystal X-ray diffraction patterns.(a) The calculated h0l plane of a perfectly ordered NbCoSb crystal without vacancies shows sharp Bragg reflections at integer hkl values.(b) Nb0.84CoSb structure with a random vacancy distribution and without displacements of Sb and Co atoms.The random vacancy distribution gives rise to monotonic diffuse Laue scattering.(c) Nb0.84CoSb structure with correlations between nearest and next-nearest neighbour vacancies.Displacements of Sb and Co atoms are indicated by arrows.The short-range Nb-vacancy order in (c) results in highly structured diffuse scattering between the Bragg reflections, whereas the long-range Nb-vacancy order in (d) results in sharp satellite reflections.Fig. S9(a) shows the diffuse scattering calculated for a Nb0.84CoSb crystal with only occupational disorder (correlations between nearest and next-nearest neighbour vacancies).The intensity of the diffuse scattering for a crystal with only occupational disorder decreases with increasing scattering angle.Fig. S9(b) shows the diffuse scattering calculated for a Nb0.84CoSb crystal with only displacive disorder (displacements of Sb and Co atoms around the vacancies).The diffuse scattering for a crystal with only displacive disorder shows asymmetries with respect to the Bragg reflections.It should be noted that the diffuse scattering calculated for the crystal with only displacive disorder looks different from the one reported in the Supporting Information of (Roth et al., 2021), which was calculated using a custom Python script.The observed diffuse scattering in the h0l plane in Fig. 3 is thus due to both occupational and displacive disorder (Fig. S9(c)).
Figure S9
Figure S9 Structure models and their corresponding calculated single crystal X-ray diffraction patterns.(a) The calculated h0l plane for a Nb0.84CoSb crystal with only occupational disorder.Correlations between nearest and next-nearest neighbour vacancies give rise to the observed diffuse scattering.(b) The calculated h0l plane for a Nb0.84CoSb crystal with only displacive disorder.Displacements of Sb and Co atoms give rise to the observed diffuse scattering.(c) The highly structured diffuse scattering in the h0l plane is due to both occupational and displacive disorder.
Figure S14
Figure S14 Monte Carlo refinement applied to the diffuse scattering in the h0l plane from singlecrystal X-ray diffraction data of the thermally quenched sample (Q-0.84 #2).Evolution of (a) the Rvalue, (b) the target correlation between next-nearest neighbour vacancies ( (1,0,0) ), (c) the target distance between a vacancy i and a neighbouring Sb atom k ( ), and (d) the target distance between a vacancy i and a neighbouring Co atom k' ( ′ ).The figure shows the average value (blue) and the smallest and highest value (red) at each refinement cycle.The value with the lowest R-value at each refinement cycle is shown in black.
Figure S15
Figure S15 Monte Carlo refinement applied to the diffuse scattering in the h0l plane from threedimensional electron diffraction (3D ED) data of the thermally quenched sample (Q-0.84 #2).Evolution of (a) the R-value, (b) the target correlation between next-nearest neighbour vacancies ( (1,0,0) ), (c) the target distance between a vacancy i and a neighbouring Sb atom k ( ), and (d) the target distance between a vacancy i and a neighbouring Co atom k' ( ′ ).The figure shows the average value (blue) and the smallest and highest value (red) at each refinement cycle.The value with the lowest R-value at each refinement cycle is shown in black.
Figure
Figure S16 (a) h0l plane reconstructed from three-dimensional electron diffraction (3D ED) data acquired on the slowly cooled sample (SC-0.81).(b) h0l plane after removing the Bragg reflections.(c) x0z plane of the three-dimensional difference pair distribution function (3D-∆PDF).Positive 3D-∆PDF features are red and negative features are blue.
Figure S17
Figure S17 Comparison of the x0.27z plane of the X-ray and electron three-dimensional difference pair distribution function (3D-∆PDF), both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).The 3D-∆PDF was reconstructed from the three-dimensional diffuse scattering data of which the h0l plane is shown in Fig. 3.The top row shows the 3D-∆PDF of the
Figure S18
Figure S18 Comparison of the h0l plane reconstructed from single-crystal X-ray diffraction and three-dimensional electron diffraction (3D ED) after removing the Bragg reflections, both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).
Figure S19
Figure S19 Comparison of the x0z plane of the X-ray and electron three-dimensional difference pair distribution function (3D-∆PDF), both for the thermally quenched sample (Q-0.84 #2) and the slowly cooled sample (SC-0.81).The 3D-∆PDF was reconstructed from the three-dimensional diffuse scattering data of which the h0l plane is shown in Fig. 3.The top row shows the 3D-∆PDF of the
Figure S20
Figure S20 Comparison of the x0z plane of the simulated electron three-dimensional difference pair distribution function (3D-∆PDF) for two different Q-ranges (-8 ≤ h,k,l ≤ 8 and -20 ≤ h,k,l ≤ 20).The 3D-∆PDF maps were calculated from the simulated three-dimensional reciprocal lattice of the slowly cooled sample (SC-0.81).Positive 3D-∆PDF features are red and negative features are blue.
During the Monte Carlo simulation, the values of the distances and ′ in Equation3are adjusted.When the target distances and ′ are achieved, the Lennard-Jones potential energy will achieve its minimum.Each Monte Carlo step, one Sb atom is moved towards its neighbouring vacancy and one Co atom is moved away from its neighbouring vacancy.When the new configuration has a lower energy E, then it is always accepted.When the new configuration has a higher energy E, then it is only accepted when the transition probability P in Equation 2 is less than a random number η, chosen uniformly in the range [0,1] (Neder
Table S1
Average structure refinement for the thermally quenched sample (Q-0.84 #2).The dynamical refinement from the Bragg reflections in three-dimensional electron diffraction (3D ED) data acquired on three different crystals is compared with the reference refinement from the Bragg reflections in single-crystal X-ray diffraction data(Roth et al., 2021).Refined atomic displacement parameters for Sb, Co and Nb (split model in Table1). | 3,698.8 | 2024-01-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Research on Scientific Data Literacy Education System
This article analyzes the connotation of scientific data literacy, uses literature research methods and case analysis methods to investigate and analyze the scientific data literacy education practices of universities or institutions, and proposes the scientific data literacy teaching objectives, teaching objects, education models and education strategies. It has important theoretical value and practical reference.
Introduction
Under the E-Science environment, the paradigm of scientific research has changed.Following experimental science, theoretical science, and computational science, a fourth research paradigm called "data-intensive science" emerged.
Under this type of scientific research model, "data drives scientific development, science is data, and data is science" [1].Scientific data is not only the product of scientific research activities, but also the foundation of scientific research activities.It contains a lot of research value and use value.Scientific data has become an important academic information resource.Researchers have explored new scientific research results through the analysis of a large amount of scientific data and promoted scientific development.
The formation of an intensive data environment has caused the scientific data produced by universities to grow in volume in terms of quantity, variety, and speed.Researchers face a series of data management issues, such as data management planning, data citation, data publishing, and ethical use of data, etc.How to cite this paper: Qi, Z. (2018) Research on Scientific Data Literacy Educa-
Research Review
From the perspective of literature research, the foreign library community has focused on the role and methods of library scientific data literacy training.For example, Koltay emphasized the importance of data literacy in the mission of university libraries, and pointed out that data literacy should have a unified term [4]; Frank et al. introduced the meteorological disciplines as an example to introduce the data literacy education of university libraries [5]; Dillo paid attention to the joint data management of archives and libraries.Training activities, the establishment of the FrontOffice-BackOffice model [6]; Maybee using a grounded theory to review the syllabus of the nutrition science and political science of Purdue University, analysis of students' information literacy and data information literacy demand [7].Most foreign research libraries have already carried out corresponding scientific data literacy education activities and diversified their teaching forms, including offering elective courses, seminars, and online courses, such as Harvard University, Yale University, and University of Virginia.The University of Virginia Library's data literacy education curriculum is well-established.It is designed longitudinally according to the data life cycle, and is rolled out horizontally in different subject areas to provide specialized training for researchers in specific disciplines.
Chinese university libraries have realized the importance of scientific data management and have begun exploring data management in the construction of institutional knowledge bases such as Peking University, Tsinghua University, Xiamen University, Xi'an Jiaotong University, and Wuhan University.In 2014, led by Fudan University, nine domestic university libraries jointly initiated the data, and using data [10].From the above analysis, scientific data literacy is similar to information literacy, including data awareness, data management knowledge, and data management skills.At the same time, scientific data literacy has a periodic nature, emphasizing the collection, processing, and evaluation of scientific data.Management and utilization activities focus on the various skills required to manage data that are required in the basic scientific research process.
In addition, scientific data literacy emphasizes the ability to analyze data, present data, and use data management tools.
Specific to the subject area, the requirements for scientific data literacy ability are more specific and more specific.For example, sociology emphasizes the ability of data collection and statistical analysis, and economics specializes in designing quantitative economics courses, emphasizing data analysis and modeling capabilities.Bioinformatics emphasizes the ability to use computer as a tool to store, retrieve, and analyze biological information.In the field of journalism, the Digital News Center of the School of Journalism at the School of Journalism at Columbia University has targeted the new position of "data journalist" and proposed that the six hard skills that journalists in the post-industrial era should possess include data and statistical capabilities, and master user analysis, tool capabilities and data analysis skills [11].The data literacy in the subject area has embedded characteristics.This embeddedness is reflected in the cooperation of teaching methods.For example, teachers from the Department of Sociology at the University of California, Los Angeles, collaborate with librarians in data lite-racy education [12], and professional teachers teach scientific research methodologies with professional knowledge, the librarians impart skills such as data collection, storage and management, and work together to give full play to their respective advantages.
Scientific Data Literacy Teaching Objective
Researchers face new requirements for managing, sharing and supervising data, so having the appropriate knowledge and skills is crucial.However, data management ability education is often not used as the content of the student's curriculum system.The disconnection between scientific data management needs and researcher data management capabilities has become a major obstacle to data protection and the full use of scientific data.Most of the university library's data literacy education has clear goals.For example, the goal of Purdue University's data literacy program is to establish the foundation of library science data literacy education, and to develop appropriate librarians based on the appropriate data management skills in the subject area.Data literacy courses and programs provide standard processes.The MIT Library's data management training goal is to help researchers learn scientific data management skills, including the basics of scientific data management, scientific data file organization, and version control.The UK Data Management Center (DCC) believes that under the e-Science research environment, research libraries conduct data management training and data literacy education to help researchers arm their knowledge in data sharing, preservation, and long-term access skills, and as a goal to guide the development of various activities.
Scientific Data Literacy Teaching Object
Graduate students and researchers are the main objects of data literacy education in university libraries.Graduate students are the bearers and successors of future scientific research work.The problem they face is how to change from a student role to a professional field researcher.It is of great significance for graduate students to adapt to study life and participate in scientific research practice.It is also indispensable for researchers who have engaged in front-line research to receive data management training.They are faced with huge data processing and preservation issues.How to develop data management plans to meet the data management and sharing requirements of funding agencies and publishers, how to manage data in a standardized manner for future discovery and reuse, how to implement data security and backup and long-term preservation, and how to comply with data ethics and ethics.Problems require the librarians to provide the necessary assistance and support.
1) Embedded in scientific research activities for data literacy education
The embedded scientific data literacy education model embeds the content of scientific data literacy education in the teaching of specialized courses and regards the library scientific data literacy education as an integral part of the curriculum objectives of various subjects.It not only completes professional teaching, but also requires students to master the knowledge of scientific data management.And skills, and use scientific data to solve professional problems.The University of California, Los Angeles Library embeds data literacy and information literacy education in a sociology program that is coordinated by a librarian and a data archivist based on syllabus [13].The DMTpsych Data Literacy course at York University helps psychology researchers and researchers learn data management and develop data management plans [14].The Purdue University Library has conducted research on data literacy education for researchers in fields such as agriculture and bioengineering.The members are composed of data librarians, subject specialist librarians, and members of the discipline [15].The Data Train of the Cambridge University Library Data Literacy Education Project is targeted at archeologists and socio-anthropologists [16].It can be seen that the embedded scientific data literacy education model requires that library data librarians cooperate with professional teachers, jointly conduct curriculum and lesson plan design, and undertake scientific data literacy education module teaching.This puts high demands on the professionalism and understanding structure of library data librarians.
2) Whole-course education based on the life cycle of scientific data The scientific data life cycle stems from the life cycle of scientific research and can be broadly divided into five stages: data acquisition, data production, data storage and management, data preservation and sharing, data citation and publication.The library can develop a full-process data literacy education model based on the life cycle of scientific data.The "data collection phase" is the start-up phase of scientific research projects.This phase is mainly to provide training and lectures on data resources and provide data support for project initiation.This is the strength of university libraries.At this stage, the library can introduce the basics of data management and assist in the development of data management plans.The "data production phase" and the "data storage and management phase" run through the entire research phase of the scientific research project.This phase produces a large amount of raw data and experimental or survey data, and stores and manages the data.The library can be integrated into research.In the team, explain data statistics, how to use analysis software, conduct data analysis and use metadata to describe the professor of data sets.At the same time, data management is performed according to the original data management plan.This requires higher academic background and scientific research ability for librarians.It also requires a large amount of investment by university libraries to obtain sufficient hardware and software support, including technical support, hardware equipment, and personnel strength.The "data preservation and sharing" stage is to fully protect and utilize the scientific data produced.University libraries have unique advantages in this regard and should assume their own responsibilities.The "data reference and publication phase" is the final stage of the scientific data life cycle.At this stage, university libraries mainly carry out education on data reference academic norms, which can increase data reusability and sharing ability, and increase the recognition of data producers.The verification of the scientific research process is the proper meaning of the university library.The university library can explain the data ethics and citation normative knowledge through specific cases, and can also train data mining and data correlation technologies to promote the reuse of data.Make the most of the data.The education model is suitable for "minority" education, that is, a specific project group to participate in a specific project.The education model is also a kind of "practical" education, focusing on the ability to use data resources to solve practical research problems in the scientific research process. 3
Scientific Data Literacy Education Strategy
1) Develop related data literacy education according to disciplines Discipline is the most natural and basic basis for differentiating user groups in university libraries.Researchers of different specialities have different needs for scientific data literacy skills.For example, engineering disciplines are based on experimental data and can focus on experimental data statistics, analysis, and visualization of data.Research on humanities and social sciences mainly relies on survey data, government-disclosed statistical data, etc., and can focus on the collection of data and education on evaluation capabilities.Specialization features are especially reflected in the embedded education model based on interdisciplinary cooperation.If you do not have professionalism and do not design tutorial content according to professional characteristics, the embedded education model will lose its significance and effect.Therefore, the library's scientific data literacy education should emphasize the characteristics of disciplines, differentiate the design of scientific data literacy education curriculum content according to the different needs of the discipline users, and develop data literacy education for researchers in specific disciplines. 2
) Conduct heuristic education to develop a comprehensive understanding of the data
The cultivation of data literacy should be based on rigorous scientific research purposes, using the data life cycle as a guideline, heuristic education as a means, and data manipulation to foster a comprehensive understanding of the data.The capabilities covered by data literacy are closely linked to the data life cycle.The data management training education generally adopted by foreign universities is also based on the data life cycle.Athanases pointed out that the data literacy education and data services provided by university libraries should use the complete data life cycle as the guideline to cultivate users' understanding of the data life cycle in a series of processes from data acquisition to conversion application, and in the course of data management.Deepen understanding [17].In addition, the education based on data life cycle should also focus on cultivating critical thinking of users.The reason is that the data collection and transformation are carried out under certain data processing conditions.The data is limited because of the limitations of data processing conditions.The ground will produce different degrees of error, and the critical thinking of the data can also be simply understood as the data provider must be responsible for the data he provides, so that the data provider must timely review the data life cycle, under the appropriate conditions for data Re-acquisition, correction or deletion to ensure the rationality of relevant data research.In terms of specific educational methods, critical thinking also to some extent reflected in the understanding of data sources, including how to properly produce, read graphics and charts, draw correct conclusions from existing data, identify data is misunderstood and Improper use and so on.Under the circumstances that the scientific research environment is constantly changing, university libraries must fully understand the urgent needs of scientific researchers in data management, combine data life education with data literacy education, and use heuristic education to help researchers manage and use data in order to cultivate their good data literacy.Scientific data integrity education model for scientific researchers.Integrate scientific data literacy education in research integrity education, focusing on the importance of scientific data for scientific research for scientific researchers, avoiding academic misconduct caused by data fraud, enhancing the reliability of scientific research results, and opening up new areas of academic norms.Improve the construction of scientific research integrity system.It can thus be seen that the scientific data literacy education in the library is not isolated, but rather chimeric, and the library's other services work collaboratively.
4) Focus on cultivating researchers' scientific research capabilities
Scientific research ability is an indispensable and important part of scientific data literacy education.Therefore, scientific data literacy education should focus on cultivating the ability of researchers to conduct scientific research.Knowledge can be taught, and the ability must be obtained by the student through hands-on practice, exploration and summary under the guidance of the teacher.After studying basic theoretical knowledge related to scientific data literacy education, researchers must participate in scientific research projects to cultivate data collection, statistical analysis, management, and data security capabilities in scientific research.The whole-process education model based on the life cycle of scientific data is designed specifically on the basis of this feature.Through the whole process of scientific data life cycle education, researchers can firmly establish the awareness that data management is a necessary part of the scientific research process, in order to realize breakthroughs from data knowledge to data discovery and data innovation, and from students to real students.The transformation of senior professional scientific research personnel can realize the leap from engaging in scientific research under the guidance of mentors to creating new research fields independently.
Teaching Evaluation
The evaluation of data literacy courses mainly includes two aspects: teacher teaching and student learning outcome evaluation.The evaluation of teachers' teaching includes evaluation of the methods and models used in the teaching of data literacy to determine whether the expected goals of data literacy education have been achieved.For example, the Purdue University Library conducts focus group discussions after the end of course to assess the course results.The assessment of student learning outcomes is the evaluation of the improvement in data management knowledge and skills through data literacy teaching.For example, Cornell University and Cambridge University Library use the end-of-term assessment, learning achievement report (usually papers and assignments) to inspect students learning effect.
Conclusions/Suggestions
Based on the comprehensive research and practice, we can see that scientific data literacy education has risen in foreign research institutions, and domestic re-search has started relatively late.It is still in the stage of introduction, absorption, and digestion.In general, the following problems exist in the scientific data quality education of university libraries: 1) The theoretical level of the concept of scientific data literacy in libraries is still in its infancy, and no specific theoretical framework has been formed, and the design of specific paths is lacking.
2) Insufficient understanding of the importance of scientific data literacy education, lack of top-level design and policy support.
3) Some libraries have conducted scientific data education practices, but they lack extensive research on the actual needs of users, lack research on the "inter-institutional" collaborative development management mechanism, and lack specialized organizations and professional staff.
4) The library community lacks standardized consensus and collaborative management of scientific data management, scientific data literacy education, etc.There are no uniform standard and corresponding rules in the industry.
Each library often sets its own rules and self-contained system, which is difficult to carry out in-depth and effective external cooperation and exchange.
With regard to the problems existing in the practice of scientific data literacy education in university libraries, the scientific data literacy education in university libraries is urgently needed and we can carry out the following tasks: 1) Designing scientific data literacy education courses based on user needs The first is to conduct a literature survey on the theoretical and practical status of scientific data literacy education in university libraries, sum up experiences, analyze problems, and explore the feasibility and teaching model of scientific data literacy education in university libraries.Second, through the design of questionnaires, or semi-structured interviews to understand the actual needs of users.Only on the basis of full investigation and study, the basic theories of scientific data literacy education in university libraries can be scientifically and reasonably designed to meet the needs of users in the scientific data literacy education courses in university libraries, and then to meet the needs of users to achieve services conversion of user concept.
2) Multi-party cooperation to build an "inter-institutional" collaborative development management mechanism The scientific data literacy education in colleges and universities should proceed from the internal teaching and research work flow of the university, take the library as the leading factor, and take the management mechanism of multi-cooperation and "cross-organization" collaborative development.In this regard, many libraries provide many successful experiences and useful references.
) Use MOOC mode In recent years, the rise and development of the mass open online course (MOOC) has brought about a positive and significant impact on the global Z. Qi DOI: 10.4236/jss.2018.66017193 Open Journal of Social Sciences higher education and has become an effective complement to higher education.MOOC is learner-centered and provides learners with personalized education services that fully stimulate learners' subjective initiative and improve their learning effectiveness.Facing the reform of educational technology brought about by the rise of MOOCs, and the reform of educational concepts and teaching models, we should make full use of the current network environment and information exchange technologies, effectively take advantage of the MOOC platform, and seize the opportunity of providing embedded education to break professional teaching.The boundary between scientific data literacy education and the professional teachers' development of concept maps, design team projects, discussions, and assignments, and the integration of scientific data literacy-related teaching contents at any time in accordance with the professional curriculum system, providing learners with professional-related learning in due course, resources, information methods and other aspects of help and support to achieve professional teaching and scientific data literacy mutual penetration and improvement.This will enable learners to gradually improve their information utilization and research innovation capabilities while mastering professional curriculum knowledge.
3)
Provide personalized data literacy education courses According to the different needs of different users to provide corresponding scientific data literacy education, including: Scientific data education model for most users.The information quality education course for undergraduates is the main field of scientific data general education.It mainly introduces the basic theory and methods of scientific data, enabling learners to understand scientific data and gradually cultivate data awareness.Discipline data literacy education model for high-level users (teachers, doctors, masters, etc.).It mainly provides specialized lectures and trainings for specific disciplines.It can rely heavily on subject librarians and rely on academic services and academic platforms.The massive growth and the urgent need to develop scientific data have also added new content to the discipline services, and they complement each other.For example, the Wuhan University Library has embedded a scientific data management module in its academic service platform, which has played an exemplary role for Chinese university libraries.Personalized scientific data education model for specific users.The development of big data enables scientific data education to be truly personalized.The library can build an online scientific data literacy education platform based on adaptive learning technology, realize user self-organization learning, self-adjustment learning, and can push learning based on user characteristics.Resources, and follow up in time, automatically adjust.Reference consulting services are also an important means for achieving personalized scientific data education.
For example, Cornell University's Research Data Management Group (RDMSG) is a multi-academic collaborative effort to help create and implement data management plans, apply best practices to manage data, and provide data management services at any stage of the research process.The Oxford Institute for Emerging Institutional Data Monitoring Services (EIDCSR) consists of several research institutes within the University of Oxford including the University Z. Qi DOI: 10.4236/jss.2018.66017197 Open Journal of Social Sciences Computing Center (project hosting, research and consulting), Research Service Office (policy research), Podlin Library (metadata management) and scientific research project team (participating in research) work together to complete.Therefore, in the construction of a scientific data literacy education platform, the library shall collaborate with the school information construction center, and actively cooperate with the relevant departments of the school to improve the functional modules of the platform and achieve as convenient and interactive as possible.In addition, in the design of data literacy teaching programs, the library can work with professional teachers throughout the process to embed data literacy education content into professional curriculum design and teaching practices, share teaching tasks, and conduct education on subject embedded science data.3)Organization and staff guaranteeThe construction of the staff team and organization is the precondition for the library to carry out scientific data literacy education.Many university libraries have set up specialized data management agencies and set up relevant positions based on their responsibilities and service priorities.The New York University Library has established a data service studio and has set up 5 positions: the data service coordinator, Data Services and Public Policy Librarians, Data Services Assistant Librarians, Data Services High Commissioners, and Data Services Librarians.The Johns Hopkins University Library's Digital Research and Supervision Center has seven positions: Senior Data Education Expert, Academic Communication Expert, and so on.The construction of personnel and organizations for scientific data education should be strengthened.On the one hand, the training and re-education of data service librarians should be strengthened on the basis of existing subject librarians and consulting librarians to form a powerful specialized data education hall, staff team.On the other hand, we must strengthen the construction of library professional data service organizations, coordinate and coordinate the library's data services, and under the guidance and guidance of the collaborative development management mechanism, gather the strength of all parties to expand library data management and service.Scientific data literacy education is a new field for university libraries, not only across disciplines but also across traditional university library organizations.University libraries should adapt to the development of the data era, actively change their concepts, fully recognize the importance of scientific data, attach importance to the development of data librarians, and strive to explore scientific data literacy education models and educational content, and gradually establish awareness of data cultivation as a guide.Based on data knowledge and skills, a more complete innovation system of scientific data literacy education for the purpose of scientific data standard application is demonstrated, highlighting the new mission and new value of university libraries in the E-Science environment. | 5,516 | 2018-05-23T00:00:00.000 | [
"Education",
"Computer Science"
] |
Attenuation of circulatory shock and cerebral ischemia injury in heat stroke by combination treatment with dexamethasone and hydroxyethyl starch
Background Increased systemic cytokines and elevated brain levels of monoamines, and hydroxyl radical productions are thought to aggravate the conditions of cerebral ischemia and neuronal damage during heat stroke. Dexamethasone (DXM) is a known immunosuppressive drug used in controlling inflammation, and hydroxyethyl starch (HES) is used as a volume-expanding drug in cerebral ischemia and/or cerebral injury. Acute treatment with a combined therapeutic approach has been repeatedly advocated in cerebral ischemia experiments. The aim of this study is to investigate whether the combined agent (HES and DXM) has beneficial efficacy to improve the survival time (ST) and heat stroke-induced cerebral ischemia and neuronal damage in experimental heat stroke. Methods Urethane-anesthetized rats underwent instrumentation for the measurement of colonic temperature, mean arterial pressure (MAP), local striatal cerebral blood flow (CBF), heart rate, and neuronal damage score. The rats were exposed to an ambient temperature (43 degrees centigrade) to induce heat stroke. Concentrations of the ischemic and damage markers, dopamine, serotonin, and hydroxyl radical productions in corpus striatum, and the serum levels of interleukin-1 beta, tumor necrosis factor-alpha and malondialdehyde (MDA) were observed during heat stroke. Results After heat stroke, the rats displayed circulatory shock (arterial hypotension), decreased CBF, increased the serum levels of cytokines and MDA, increased cerebral striatal monoamines and hydroxyl radical productions release, and severe cerebral ischemia and neuronal damage compared with those of normothermic control rats. However, immediate treatment with the combined agent at the onset of heat stroke confers significant protection against heat stroke-induced circulatory shock, systemic inflammation; cerebral ischemia, cerebral monoamines and hydroxyl radical production overload, and improves neuronal damage and the ST in rats. Conclusions Our results suggest that the combination of a colloid substance with a volume-expanding effect and an anti-inflammatory agent may provide a better resuscitation solution for victims with heat stroke.
Background
Unless immediately recognized and treated, heat stroke is often lethal, and victims who do survive may sustain permanent neurological damage [1]. The clinical diagnosis of heat stroke is demonstrated when hyperthermia is accompanied with circulatory shock (arterial hypotension), intracranial hypertension, and cerebral ischemia and injury [2,3]. Meanwhile, the heat stroke-induced central nervous system dysfunction includes delirium, convulsion, or coma [4]. Hence, prolonging survival time in heat stroke victims may offer more sufficient time for urgent treatment, thereby ameliorating the heat stroke-induced damage.
Several lines of evidence indicate that rodents share with humans almost the same heat stroke syndromes, such as arterial hypotension, activated inflammation, and multiorgan dysfunction (in particular, cerebral ischemia, injury, and dysfunction [5][6][7]. In the rodents heat stroke model, significant decrements in both mean arterial pressure (MAP) and cerebral blood flow (CBF), but increments of cerebral monoamines levels, free radical productions and systemic cytokine levels are obtained in urethane-anaesthetized rats after heat stroke [8,9]. These pathophysiological changes are known to aggravate the conditions of cerebral ischemia and neuronal damage during heat stroke in rats [10]. Activated inflammation is evidenced by overproduction of proinflammatory cytokines (e.g., interleukin-1b (IL-1b) and tumor necrosis factor-α (TNF-α)) in circulation of heat stroke rats [11,12]. High levels of cytokines and radicals in the peripheral blood stream, as well as excessive accumulation of glutamate, hydroxyl radicals, dopamine (DA) and serotonin in the central brain, correlate with the severity of circulatory shock, cerebral ischemia and neuronal damage during heat stroke in rats [6,9,13,14].
Various clinical and experimental investigations have shown that single doses of dexamethasone (DXM; exogenous glucocorticoids) or hydroxyethyl starch (HES) are extensively used in the treatment of cerebral ischemia and/or cerebral injury [15][16][17]. In the studies of heat stroke, systemic treatment with DXM attenuated serum IL-1b levels and cerebral ischemia damage, and improved survival in heat stroke [18]. Additionally, the prolongation of survival in rats with HES therapy was found to be associated with augmentation of both arterial blood pressure and CBF as well as reduction of cerebral ischemia, hypoxia, and neuronal damage during heat stroke [19]. Although, many therapeutic agents show potential promise in many animal models, the results of most single-agent clinical trials are sobering. Consequently, various authors advocate studies to estimate the efficacy of combined therapeutic approaches [20,21]. Furthermore, there is less attention to evaluate immediate effects of both DXM and HES (the combined agent) on heat stroke-induced pathophysiological changes, let alone their neuroprotective underlying influences, especially in the aspects of striatal monoamines and hydroxyl radical production release. Based on these concepts, we propose whether application of the combined agent immediate treatment has efficacy to elongate the survival time, and improve the heat strokeinduced circulatory shock, cerebral ischemia, and neuronal damage in rats. Furthermore, we also attempt to ascertain whether the neuroprotective effects of the combined agent treatment are associated with inhibition of cerebral release of glutamate, DA, 5-HT, hydroxyl radicals, and the serum IL-1b, TNF-α and malondialdehyde (MDA) levels during heat stroke.
Experimental animals
Male Spraque-Dawley rats weighing between 300 and 350 g were obtained from the National Science Council of Republic of China (Taiwan). Between experiments the animals were housed individually at an ambient temperature of 24 ± 1°C with a 12-h light-dark cycle, with the lights being switched on at 0600 h. Animal chow and water were allowed ad libitum. All protocol were approved by the Animal Ethics Committee of the Chia-Nan University of Pharmacy and Science, Tainan, Taiwan (approbated no. CN-IACUC-94007) in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and the guidelines of the Animal Welfare Act. Adequate anesthesia was maintained to abolish the corneal reflex and pain reflexes by tail-pinching throughout all experiments (approximately 8 hr) by a single intraperitoneal dose of urethane (1.4 g.kg -1 b.w., i.p.). At the end of the experiments, control rats (and any rats that had survived heat stroke) were killed with an overdose of urethane. One hundred thirty-eight rats were used in this study. Fifty-three rats of 138 were used for examining in Table 1, Figure 1 and 2 (three premature deaths during heat stroke induction and two premature deaths during animal surgery). Forty-three rats of 138 were used for examining in Table 2 (three premature deaths during heat stroke induction). Forty-two rats of 138 were used for examining in Figure 3 and 4 (two premature deaths during heat stroke induction). No premature deaths during anesthesia.
Animal surgery and physiological parameter monitoring
Under urethane anesthesia, the right femoral artery of the rats was cannulated with polyethylene tubing (PE50) for physiological monitoring, the right femoral vein was also cannulated for blood sampling and drug administration. The animals were then positioned in a stereotaxic apparatus (Kopf model 1460) for measurement of CBF and microdialysis experiment. Physiological monitoring included colon temperature (TCO), MAP, heart rate (HR) and CBF values in the cerebral corpus striatum.
Experimental groups
Rats were randomly assigned to one of six major groups. One group of rats (n = 8) with heat stroke received normal saline (NS) treatment (1 or 11 ml/kg body wt, 0.9% NaCl solution, i.v.) at the onset of heat stroke. Heat stroke was induced by exposing the animals to an ambient temperature of 42°C (with a relative humidity of 60% in a temperature-controlled chamber). The moment in which MAP and local CBF began to sharply decrease from their peak levels was arbitrarily defined as the onset of heat stroke, as shown in Figure 1. The interval between the start of heat exposure and onset of heat stroke were taken as values of latency. The interval between the initiation of heat stroke onset and animal death were taken as values of survival time. Another group of rats (n = 8) with heat stroke respectively received DXM 4 mg/kg i.v., HES (10%, 11 ml/kg i.v., Fresenius AG, Bad Homburg, Germany), or the combined agent [DXM (4 mg/ml/kg) together with HES (10%, 11 ml/kg)] i.v. at the onset of heat stroke. The other group of rats (n = 8) were normothermic control rats which were exposed to an ambient temperature of 24°C for at least 90 min to reach thermal equilibrium. Their physiological parameters were continuously recorded for up to 480 min (at the end of experiment). Their colon temperatures were maintained at about 36°C using the electric thermal mat before the start of experiments. The rats of these groups were continually monitored from physiological parameters (such as T CO , MAP, HR, and CBF) and ST during heat stroke. According to the values of ST, rats treated with the combined agent displayed the best beneficial effect on prolongation of ST in Table 1. Consequently, investigation of the heat stroke-induced circulatory shock, cerebrovascular dysfunction, cerebral ischemia and neuronal damage would be emphasized by the influence of the combined agent administration.
Measurements of Cellular Ischemia and Injury Markers
After cannulation of vessels, the animal's head was mounted on a stereotaxic apparatus (Davis Kopf Instruments) with the nose bar positioned 3.3 mm below the horizontal line. Following a midline incision, the skull was exposed and a burr hole was made in the skull for the insertion of a dialysis probe (4 mm in length, CMA/ 12, Carnegie Medicine, Stockholm, Sweden). The microdialysis probe was stereotaxically implanted into the corpus striatum according to the atlas and coordinates of Paxinos and the coordinates of Paxinos and Watson (1982) [22]. As the methods described previously [19,23], an equilibrium period of 2 hours without sampling was allowed after probe implantation. The dialysis probe was perfused with Ringer's solution (147 mM Na + , 2.2 mM Ca 2+ , 4 mM K + , pH 7.0) at 2 μl/min using a CMA/100 microinfusion pump. Dialysates were collected every 20 min in a CMA140 fraction collector. Aliquots of dialysates (5 μl) were injected onto a CMA600 Microdialysis Analyzer (Carnegie Medicine) for measurement of lactate, glycerol, pyruvate and glutamate. Four analytes can be analyzed per sample and the result is displayed graphically within minutes. The thermal experiments were started after showing stabilization in four consecutive samples.
The lactate/pyruvate ratio is a well-known marker of cell ischemia, that is, an inadequate supply of oxygen and glucose. Glycerol is a marker of how severely cells are affected by the ongoing pathology. Glutamate is released from neurons during ischemia and initiates a pathological influx of calcium leading to cell damage. It is an indirect marker of cell damage in the brain, as described previously [19,23].
Measurements of Extracellular DA and 5-HT release
Dialysates samples were collected at 20 min intervals and assayed by an HPLC system. Extracellular monoamine concentrations were assayed by HPLC combined with an electrochemical detection system. The HPLC system comprised a Beckman 126 pump (BeckmanInstru-ments) and a CMA-200 microautosampler (CMA/ Microdialysis, Stockholm, Sweden), and a microbore reversed-phase column was filled with Inertsil ODS-2 (GSK-C18, 5-mmOD, 150 × 1.0-mmID). The performance of each microdialysis probe was calibrated by dialysis of a known amount of the standard mixture, and recovery of all analyses was then determined. Brain concentrations of DA and 5-HT were calculated by determining each peak height ratio relative to the Table 1 Effects of heat exposure (HE; Ta = 42°C) on both latency for the onset of heat stroke and survival time in rats treated with normal saline (NS), in rats treated with hydroxyethyl starch (HES), in rats treated with dexamethasone (DXM), and in rats treated with HES +DXM Values are the means ± SEM of 8 rats per group. Groups 2-6 exposed to 42°C had heat exposure withdrawn at the onset of heat stroke.*P < 0.05, compared with the corresponding control values (rats kept at 24°C; treatment 1). (one-way ANOVA, followed by Duncan's test). † P < 0.05, compared with the corresponding control values (rats treated with NS (11 ml/kg) and kept at 42°C; treatment 3). (one-way ANOVA, followed by Duncan's test). ‡ P < 0.05, compared with the corresponding control values (rats treated with HES (11 ml/kg) and kept at 42°C; treatment 4). (one-way ANOVA, followed by Duncan's test). # P < 0.05, compared with the corresponding control values (rats treated with DXM (4 mg/kg) and kept at 42°C; treatment 5). (one-way ANOVA, followed by Duncan's test). internal standard and were also corrected by each probe performance. The internal standard 3-methoxy-tyramine and standard mixtures were prepared fresh daily. The mobile phase was prepared by adding 60 ml of acetonitrile, 0.42 g of SDS (2.2 mM), 200 g of sodium citrate (30 mM), 10 mg of EDTA (0.027 mM), and 1 ml of diethylamine in double-distilled water.
Hydroxyl Radical Production Monitoring
The concentrations of hydroxyl radicals were measured by a modified procedure based on the hydroxylation of sodium salicylate by hydroxyl radicals, leading to production of 2,3-dihydroxybenzoic acid (2, Measurement of Serum MDA Levels 0.25 mL of serum was added to 25 μL of 0.2% BHT and 12.5 μL of 10 N NaOH (to adjust to pH~13) and incubated at 60°C for 30 min in a shaking water bath. To this was added 1.5 mL of 0.44 mol/L (or 7.2%) TCA containing 1% KI, and the mixture was placed in ice for 10 min and centrifuged (1,000g, 10 min). To 1 mL of the supernatant was added 0.5 mL of 0.6% TBA, and the mixture was heated at 95°C for 30 min. After cooling the mixture was extracted with 1.5 mL of n-butanol, and 20 μL of the butanol layer was injected to a C-18 (4.6 × 150 mm) column fitted with a guard and eluted at 1 mL/min by using 65% (v/v) 50 mM KH 2 PO 4 -KOH and 35% (v/v) methanol with spectrophotometric (532 nm) detector.
Measurement of Serum IL-1b and TNF-a Levels
The blood samples were acquired 100 min after the initiation of heat exposure (or 20 min after the onset of heat stroke) in heat stroke rats or the equivalent time in normothermic controls. 5 ml of blood was withdrawn from the femoral vein of each rat for measurement of serum IL-1b or TNF-α. Blood samples were allowed to clot for 2 hours at room temperature or overnight at 2-8°C before centrifuging for 20 minutes at approximately 2000 ×g. Serum was quickly removed from these plasma samples and assayed for IL-1b or TNF-α immediately. The DuoSet Enzyme-linked Immunosorbent Assay (ELISA) Development System rat IL-1b or TNF-α kit (R&D Systems, Minneapolis, MN, USA) was used for measuring the levels of active rat IL-1b or TNF-α present in serum. This assay employs the quantitative colorimetric sandwich ELISA technique.
Neuronal Damage Score
At the end of each experiment, the brain was removed, fixed in 10% neutral buffered formalin and embedded in paraffin blocks. Serial (10 μm) sections through the striatum were stained with hemotoxylin and eosin for microscopic evaluation. The extent of striatal neuronal damage was scored on a scale of 0-3, modified from the grading system of Pulsinelli et al. (1982) [24], in which 0 is normal, 1 means that~30% of the neurons are damaged, 2 means that~60% of that neurons are damaged, and 3 means that 100% of that neurons are damaged. Each hemisphere was evaluated independently without the examiner knowing the experimental conditions. When examined for neuronal damage in gray matter, only areas other than those invaded by probes were assessed.
Statistical analysis
Data are presented as the mean ± SEM. Repeated-measures analysis of variance was used for factorial experiments, whereas Duncan's multiple-range test was used for post hoc multiple comparisons among means. For scoring neuronal damage, the Wilcoxon signed rank test was used when only two groups were compared. The Wilcoxon tests which convert the scores or values of a variable to ranks, require calculation of a sum of the ranks and provide critical values for the sum necessary to test the null hypothesis at a given significant level. The data were given by "median", and first and third quartile. A P value less than 0.05 was considered as statistical significance.
Results
The combined agent (DXM+HES) improves survival time in heat stroke Additionally, although immediate treatment with DXM (4 mg/kg) alone had no apparent beneficial effect, the combined agent (combined administration of DXM (4 mg/kg) plus HES (11 mg/kg)) immediately after the onset of heat stroke did prolong the survival time as compared with the controls (as shown in Table 1).
The combined agent (DXM+HES) attenuates hypotension, cerebrovascular dysfunction and neuronal damage during heat stroke
The effects of heat exposure (42°C for 80 min) on several physiological parameters in NS-, DXM-, HES-and the combined agent-treated rats are shown in Figure 1.
In NS-treated (11 ml/kg) and DXM (4 mg/kg)-treated groups, the values of MAP and CBF were significantly decreased at 10 or 20 min after the onset of heat stroke (or 90 or 100 min after the start of heat stress) compared with those of normothermic controls. On the other hand, the values of extracellular concentrations of glutamate, glycerol and lactate/pyruvate ratio in the corpus striatum were significantly greater than those of the normothermic controls. In HES (11 ml/kg)-treated group, rats displayed greater MAP and CBF, but lower striatal levels of glutamate, glycerol, and lactate/pyruvate ratio after onset of heat stroke than those of the NS-or DXM-treated rats. However, heat stroke-induced arterial hypotension, cerebral ischemia and increased levels of glutamate, glycerol, and lactate/pyruvate ratio in the extracellular levels of striatum were all significantly diminished by treatment with the combined agent immediately at the onset of heat stroke or 80 min after start of heat stress.
In separate experiments, 25 min after the onset of heat stroke, rats were sacrificed for determination of neuronal damage score in the corpus striatum. The data are summarized in Table 2. After the onset of heat stroke, rats treated with NS (11 ml/kg) displayed higher values of striatal neuronal damage [2 (2, 2.25)] compared with those of normothermic controls [0 (0, 0.75)] (as shown in Table 2 and Figure 5B). Values of striatal neuronal damage score in the DXM-treated rats and in the HES-treated rats were respectively [2 (2, 2)] and [1 (0.25, 1)]. However, with the combined agent [DXM (4 mg/kg)+HES (11 ml/kg)] acute treatment, neuroprotection ensured [1 (0, 1)]. Figure 5 shows that heat stroke-induced cell shrinkage, pyknosis of nucleus, and disappearance of nucleolus in the corpus striatum were attenuated by the combined agent ( Figure 5C).
The combined agent (DXM+HES) reduces cerebral striatal levels of dopamine, serotonin and DHBA during heat stroke As shown in Figure 2, twenty minutes after the termination of heat stress in the NS-treated group or DXMtreated group, all the dopamine, serotonin, and DHBA (dihydroxybenzoic acid, indirectly stood for production of hydroxyl radicals) values in cerebral striatum were significantly greater than those of the normothermic controls (P < 0.05). Immediate treatment with the combined agent at the onset of heat stroke (80 min after the start of heat stress) significantly attenuated the heat stress-induced increases levels of dopamine, serotonin, and production of hydroxyl radicals in the corpus striatum.
The combined agent (DXM+HES) reduces the levels of MDA, IL-1b and TNF-a in peripheral blood stream during heat stroke The serum IL-1β and TNF-α, and MDA levels for normothermic controls, NS-, DXM-, HES-treated heat stroke rats, and the combined agent-treated heat stroke rats are summarized in Figure 3 and 4. It can be seen from the figures that the serum IL-1β and TNF-α, and MDA levels in NS-treated heat stroke rats were all significantly higher at 20 min after the onset of heat stroke than those in the normothermic controls. The immediate treatment with HES alone and the combined agent at the onset of heat stroke attenuated the heat stroke-induced increased serum lipid peroxidation, as well as attenuating it increased the serum levels of IL-1β and TNF-α. However, these serum levels were more significantly diminished by treatment with the combined agent immediately at the onset of heat stroke (as shown in Figure 3 and 4).
Discussion
It has been reported that pretreatment with DXM (4 mg/kg) single dose, but not immediate treatment with DXM, before heat stress could increase the ST in rats by attenuating serum levels of interleukins [18]; however, there are fewer studies showing the immediate treatment with the combined agent at the onset of heat stroke. It will be more meaningful the combined treatment shows neuroprotection after heat stroke attacks. Although the previous results [19] have shown an insignificant therapeutic effect of DXM (4 mg/kg) administered immediately after the onset of heat stroke alone, the combination of DXM and HES provides a better therapeutic effect for rats with heat stroke in the present study. Additionally, the volume-expanding effect of the HES would be thought to improve survival during heat stroke in resuscitation. The HES would be 1.5-2 times the volume administered by 1-2 hour after it is given. In fact, in our previous study, intravenous infusion of 2-11 ml/kg of HES solution improved survival in a dosedependent manner during heat stroke, and suggesting treatment with 11mg/kg of HES had better prolongation in survival. Hence, the HES that acts to expand the circulatory volume and a potent inflammatory agent (such as DXM) might be combined to develop an improved fluid therapy for attenuation or prevention of heat stroke-induced damage. Although our previous results indicated that the combination of HES and DXM did provide a better survival effect for rats with heat stroke, the hemodynamic, histological and biological changes by the combined treatment immediately after the onset of heat stroke were not observed in detail. In this study, administration of the combined agent indeed appears more effective to prolong the ST in rats with heat stroke, by comparison to treatment of DXM or HES alone (shown in table 1). Similarly, in agreement with the present results, treatment of the combined agent (DXM and HES) can also offer beneficial amelioration from ischemic condition and therapeutic influence in ischemic experiments [25,26]. There is evidence that cerebral ischemia (due to arterial hypotension and A B C Figure 5 Histological examination of neuronal damage. The photomicrographs of the cerebral corpus striatum in a normothermic control rat treated with 0.9% NaCl solution (11 ml/ kg) (A), a heat stroke rat treated with 0.9% NaCl solution (11 ml/kg) (B), or a heat stroke rat treated with the combined agent (DXM +HES) (C) immediately after the initiation of heat stroke. The striatal photomicrograph in a heat stroke rat treated with DXM (4 mg/kg) is similar to (B), and in a heat stroke rat treated with HES (10%, 11 ml/ kg) is similar to (C) (data not shown). Twenty-five minutes after 80min heat exposure, the corpus striatum of the rat treated with 0.9% NaCl solution showed cell shrinkage, pyknosis of the nucleus, and disappearance of nucleolus. After acute treatment with the combined agent, neuronal damage was reduced, as shown in C. The rats were sacrificed at 25 min after the termination of heat exposure or the equivalent time for the normothermic controls. Scale bar, 50 μm.
intracranial hypertension) may be one of the major causes to induce further damage after heat stroke onset [13,18,19]. After heat stroke induction, the CBF instantly drops from highest peak, and it is concomitant with significant increments of cerebral ischemia and injury indexes, as shown in figure 1. The lactate/pyruvate ratio is a well known marker of cellular ischemia, whereas glycerol is a marker of how severely cells are affected by ongoing pathology [27]. Excessive accumulation of glutamate has been shown in ischemic brain tissue [27]. Indeed, both present and previous results [19,23,27] have demonstrated that extracellular levels of glutamate, glycerol and lactate/pyruvate in ischemic brain are greater in heat stroke rats compared with those of normothermic controls. Meanwhile, evidences of histopathological morphology and neuronal damage scores also reveal severe neuronal damage (shown in figure 5, table 2) in heat stroke rats. However, as shown in the present results, all these heat stroke-induced cerebral ischemia and injury can be alleviated by acute treatment with the combined agent.
There were many evidences [8,14,28] that the increased DA, 5-HT and glutamate in the brain during the rat heat stroke were mediated in the development of neuronal damage. Cerebral DA, 5-HT or/and glutamate overload resulting from arterial hypotension and intracranial hypertension might be responsible for occurrence of central nervous system syndromes associated with heat stroke [14,28]. Systemic administration of dopaminergic or serotoninergic nerve depletors or receptor antagonists, or glutamate receptor antagonists cloud protect against ischemic neuronal injury in experimental heat stroke [14,28,29]. In addition, recent studies showed that the excessive accumulation of cytotoxic free radicals in the brain and oxidative stress occurred during heat stroke [9,10,30]. Evidence had accumulated to suggest that heat stroke-induced cerebral ischemia and neuronal damage might be associated with an increased production of free radicals, specifically hydroxyl radicals [9,10]. Pretreatment with hydroxyl radical scavengers, such as α-tocopherol, prevented production of hydroxyl radicals, reduced lipid peroxidation and ischemic neuronal damage in the brain of rats exposed to heat stroke and prolonged subsequent survival [31]. In brief, as demonstrated by Chang et al, after the onset of heat stroke, cessation or reduction of blood flow to the brain induced neuronal damage. This neurotoxic cascade involved overproduction of glutamate, DA, and 5-HT as well as oxidative stress in the brain [6]. Likewise, in the present study, heat stroke also produces similar increases in cerebral striatal DA, 5-HT, glutamate and hydroxyl radical production in heat stroke rats. Additionally, the heat stroke rats also displayed increased levels of lipid peroxidation in the peripheral blood stream. Indeed, according to our present findings, the heat stroke-induced high levels of DA, 5-HT, glutamate, and hydroxyl radicals in rats' corpus striatum, and the elevated plasma MDA levels can be prevented by acute treatment with the combined agent. This probably implies that the immediate administration of the combined agent during heat stroke may be mediated with the decrements of cerebral monoamines and oxidative stress to prolong the ST and improve the cerebral neuronal damage in rats.
The serum concentrations of inflammatory cytokines (such as IL-1b and TNF-α) are elevated in humans and animals with heat stroke [12,18,23,32]. The levels of both TNF-α and IL-1 receptors correlate well with severity of heat stroke [32,33]. The previous studies had also shown that heat stroke induced systemic and cerebral striatal productions of IL-1b and TNF-α in both rats and rabbits [9,31,34,35]. Indeed, as it is shown in the present results, an increase of serum IL-1b and TNF-α levels is observed in heat stroke rats. The increase in the levels of these inflammatory cytokines is associated with arterial hypotension, cerebral ischemia and neuronal damage. Administration of IL-1 receptor antagonists could prevent arterial hypotension and cerebral ischemic damage, and improve survival in heat stroke. Furthermore, the present results show that treatment with the combined agent significantly attenuates the heat stroke-induced overproduction of IL-1b and TNF-α in the serum. Meanwhile, both arterial hypotension and cerebral ischemic damage are attenuated and survival of heat stroke rats is ameliorated following acute treatment with the combined agent. The immediate administration of this combined agent might exert its protective effects by attenuating the increased plasma level of IL-1b and TNF-α during heat stroke.
Our results indicated that following heat stroke, arterial hypotension, decreased cerebral blood flow, increased serum levels of IL-1b, TNF-α and MDA, and increased striatal dopamine, serotonin and hydroxyl radicals and increased of levels of glutamate, glycerol and lactate/pyruvate ratio developed. Although HES administration alone showed a pronounced effect, it was found that treatment with the combined agent conferred a moderate further beneficial effect to ameliorate these changes, and improve neuronal damage and survival time. Various clinical and experimental investigations of stroke and brain injury have shown that HES administration might reduce brain edema and intracranial hypertension [36][37][38]. It was also obtained that the values of MAP, cerebral perfusion pressure (CPP), and cerebral levels of local CBF were significantly lower during heat stroke [6,27]. The maintenance of appropriate levels of CBF might be brought about by higher CPP resulting from lower intracranial pressure (ICP; possibly due to reduction in brain edema and cerebovascualr congestion) and higher MAP during development of heat stroke [39]. This raises the possibility that HES might be a beneficial treatment for heat stroke subjects with intracranial hypertension as well as decreased cerebral perfusion. As a result of present study, we see from Figure 1, 2, 3 and 4, and Table 1 and 2 that acute treatment with HES (11 ml/kg) alone at the onset of heat stroke can alleviate the heat stroke-induced arterial hypotension, cerebral monoamines and hydroxyl radical production overload, systemic inflammation, and severe cerebral ischemia and damage. However, treatment with the combined agent (both HES and DXM) has more effective therapy than treatment with HES alone to maintain appropriate levels of MAP and CBF by attenuating the heat stroke-induced abnormal physiological and pathological changes, and results in prolongation in survival (as shown in Figure 1, 2, 3 and 4, and Table 1). Therefore, HES treatment showed partial effects on those parameters after heat stroke induction. According to our present results, it is reasonable to assume that acute treatment with both HES and DXM has a better effectiveness on reducing the heat stroke-induced damage, and augmenting ST. It is not known whether HES treatment exerts its benefit effect in heat stroke by acting through attenuation of brain edema and intracranial hypertension in present study. Of course, this needs further investigation.
Conclusions
In the present study, the heat stroke-induced increases in arterial hypotension, cerebral ischemia and neuronal damage are associated with elevated levels of DA, 5-HT, glutamate and hydroxyl radicals in rat brain, and increased circulating IL-1b, TNF-α and MDA in the peripheral blood stream. The immediate systemic treatment with the combined agent (both DXM and HES), in addition to attenuating the elevating levels of IL-1b, TNF-α and MDA in blood stream, diminishes monoamines, glutamate, and hydroxyl radical formation, and ischemia injury in the brain, and improves ST in rats with heat stroke. Our results suggest that the combination of a colloid substance with a volume-expanding effect and an anti-inflammatory agent may provide a better resuscitation solution for victims with heat stroke. | 6,966.4 | 2010-10-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
Continuous-wave optical parametric oscillation tunable up to 8 μm wavelength
We demonstrate the first cw OPO emitting mid-infrared light at wavelengths up to 8 μm. This device is based on a 3.5-mm-diameter whispering gallery resonator made of silver gallium selenide (AgGaSe2) pumped by a compact distributed feedback laser diode emitting light at 1.57 μm wavelength. Phase-matching is achieved for a c-cut resonator disk pumped with extraordinarily polarized light at this wavelength. The oscillation thresholds are in the mW region, while the output power ranges from 10 to 800 μW. Wavelength tuning is achieved via changing the radial mode number of the pump wave and by changing the resonator temperature. Simulations predict that whispering gallery OPOs based on AgGaSe2 with diameters around 2 mm can generate idler waves exceeding 10 μm wavelength.
Introduction
Continuous-wave optical parametric oscillators (cw OPOs) convert cw pump light at the wavelength λ p to signal and idler light at λ s,i in a non-centrosymmetric crystal such that photonenergy conservation is fulfilled, i.e. 1/λ p = 1/λ s + 1/λ i . Fundamentally, cw OPOs can be tuned to every output wavelength as long as it is larger than the one of the pump wave. This great potential of wavelength tunability is combined with narrow-linewidth emission and even with watt-level output powers employing conventional mirror-based systems. Thus, they are ideally suited for high-resolution laser spectroscopy [1]. A practical limit for the tuning range of cw OPOs is given by the transparency of the nonlinear-optical crystal they are based on. Operating them at wavelengths beyond the transparency range one will face significantly higher oscillation thresholds and lower conversion efficiencies.
The great majority of cw OPOs is based on oxide crystals. In these materials, the upper wavelength limit of the transparency range is given by multi-phonon resonances starting around 4.5 µm. Consequently, the mid-infrared tuning range of cw OPOs has been limited to wavelengths below 5.5 µm for more than half of a century [2].
Here, we present a continuous-wave optical parametric oscillator based on a whispering gallery resonator made of silver gallium selenide (AgGaSe 2 ). This material can be considered as transparent between 0.8 and 17 µm wavelength [3]. Thus, it is suitable for the generation of highly tunable mid-infrared light. For whispering gallery OPOs, the above mentioned photon-energy conservation has to be supplemented by the phase-matching condition m p =
Experimental setup and results
The setup sketched in Fig. 1a,b and described in detail in Ref. [5] by Meisenheimer et al. comprises a disk resonator with 3.5 mm diameter and a coupling prism made of silicon.
The pump light at 1.57 µm wavelength provided by a compact diode laser is converted in the resonator to signal and idler light. The latter two and the remaining pump light are coupled out via the prism and analyzed with respect to powers and wavelengths. At pump powers exceeding the oscillation threshold of typically some mW, we observe optical parametric oscillation. By addressing pump resonances belonging to the different mode numbers q p (number of field maxima along the radial direction), different branches of signal and idler wavelengths between 2 and 8 µm can be selected. Temperature variation enables 100-nm-wide wavelength tuning within one branch of the idler light (see Fig. 1c). The measured tuning branches are in good accordance with theoretical predictions. For a resonator diameter of 2 mm, we expect to extend the tuning range even beyond 10 µm wavelength. At 10 mW pump power, the output power P s of the signal wave ranges from 100 to 800 µW, depending on the selected branch, i. e. the selected parametric process. For the idler wave, we estimate the output power by using the relation P i = (λ s /λ i )P s . This is valid if both generated waves are coupled out of the resonator equally. The resulting values are in the range between several 10 and several 100 µW. They can be considered as lower limits for the idler power. Due to the longer wavelength, the coupling efficiency of the idler wave will be stronger than the one of the signal wave. The arrow indicates the optic axis (o.a.) of the crystal. b) Photograph of the resonator placed close to the coupling prism. c) Wavelength tuning of the idler light. The simulated tuning branches (solid lines) are labeled with the respective radial mode numbers q p,s,i and the phasematching number ∆m as (q p , q s , q i , ∆m).
Conclusion
We consider these results to be a first step towards establishing continuous-wave optical parametric oscillators as light sources in the mid-infrared range beyond 5.5 µm wavelength. Further improvements with regard to output power and wavelength tuning are expected to be achieved with recently improved materials, e.g. orientation-patterned gallium arsenide and gallium phosphide. | 1,120 | 2017-06-01T00:00:00.000 | [
"Physics"
] |
Vertical integration of high-Q silicon nitride microresonators into silicon-on-insulator platform
We demonstrate a vertical integration of high-Q silicon nitride microresonators into the silicon-on-insulator platform for applications at the telecommunication wavelengths. Low-loss silicon nitride films with a thickness of 400 nm are successfully grown, enabling compact silicon nitride microresonators with ultra-high intrinsic Qs (∼ 6× 106 for 60 μm radius and ∼ 2×107 for 240 μm radius). The coupling between the silicon nitride microresonator and the underneath silicon waveguide is based on evanescent coupling with silicon dioxide as buffer. Selective coupling to a desired radial mode of the silicon nitride microresonator is also achievable using a pulley coupling scheme. In this work, a 60-μm-radius silicon nitride microresonator has been successfully integrated into the silicon-on-insulator platform, showing a single-mode operation with an intrinsic Q of 2×106. © 2013 Optical Society of America OCIS codes: (130.3120) Integrated optics devices;(230.5750) Resonators. References and links 1. G. T. Reed and A. P. Knights, Silicon Photonics: An Introduction (John Wiley, 2004). 2. A. W. Fang, H. Park, O. Cohen, R. Jones, M. J. Paniccia, and J. E. Bowers, “Electrically pumped hybrid AlGaInGs-silicon evanescent laser,” Opt. Express 14, 9203–9210 (2006). 3. A. Liu, R. Jones, L. Liao, D. Samara-Rubio, D. Rubin, O. Cohen, R. Nicolaescu, and M. Paniccia, “A high-speed silicon optical modulator based on a metal-oxide-semiconductor capacitor,” Nature (London) 427, 615–618 (2004). 4. Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson, “Micrometer-scale silicon electro-optic modulator,” Nature (London) 435, 325–327 (2005). 5. L. Chen and M. Lipson, “Ultra-low capacitance and high speed germanium photodetectors on silicon,” Opt. Express 17, 7901–7906 (2009). 6. F. Xia, M. Rooks, L. Sekaric, and Y. Vlasov, “Ultra-compact high order ring resonator filters using submicron silicon photonic wires for on-chip optical interconnects,” Opt. Express 15, 11934–11941 (2007). 7. F. Xia, L. Sekaric, and Y. Vlasov, “Ultracompact optical buffers on a silicon chip,” Nature Photon. 1, 65–71 (2007). 8. M. Borselli, T. Johnson, and O. Painter, “Beyond the Rayleigh scattering limit in high-Q silicon microdisks: theory and experiment,” Opt. Express 13, 1515–1530 (2005). 9. M. Soltani, S. Yegnanarayanan, and A. Adibi, “Ultra-high Q planar silicon microdisk resonators for chip-scale silicon photonics,” Opt. Express 15, 4694–4704 (2007). 10. Q. Lin, O.J. Painter, and G.P. Agrawal, “Nonlinear optical phenomena in silicon waveguides: Modeling and applications,” Opt. Express 15, 16604–16644 (2007). 11. J. F. Bauters, M. J. R. Heck, D. John, D. Dai, M. C. Tien, J. S. Barton, A. Leinse, R. G. Heideman, D. J. Blumenthal, and J.E. Bowers,“Ultra-low-loss high-aspect-ratio Si3N4 waveguides,” Opt. Express 19, 3163–3174 (2011). #192394 $15.00 USD Received 17 Jun 2013; revised 15 Jul 2013; accepted 15 Jul 2013; published 22 Jul 2013 (C) 2013 OSA 29 July 2013 | Vol. 21, No. 15 | DOI:10.1364/OE.21.018236 | OPTICS EXPRESS 18236 12. M. C. Tien, J. F. Bauters, M. M. R. Heck, D. J. Blumenthal, and J. E. Bowers, “Ultra-low loss Si3N4 waveguides with low nonlinearity and high power handling capability,” Opt. Express 18, 23562–23568 (2010). 13. G. Roelkens, D. Van Thourhout, R. Baets, R. Notzel, and M. Smit,“Laser emission and photodetection in an InP/InGaAsP layer integrated on and coupled to a Silicon-on-Insulator waveguide circuit,” Opt. Express 14, 8154-8159 (2006). 14. M. Ghulinyan, R. Guider, G. Pucker, and L. Pavesi, “Monolithic whispering-gallery mode resonators with vertically coupled integrated bus waveguide,” IEEE Photon. Technol. Lett. 23, 1166–1168 (2011). 15. F. Ramiro-Manzano, N. Prtljaga, L. Pavesi, G. Pucker, and M. Ghulinyan,“A fully integrated high-Q whisperinggallery wedge resonator,” Opt. Express 20, 22934-22942 (2012). 16. J. F. Bauters, M. L. Davenport, M. J. R. Heck, J. K. Doylend, A. Chen, A. W. Fang, and J. E. Bowers, “Silicon on ultra-low-loss waveguide photonic integration platform,” Opt. Express 21, 544–555 (2013). 17. T. Barwicz, M. A. Popovic, P. T. Rakich, M. R. Watts, H. A. Haus, E. P. Ippen, and H. I. Smith, “Microringresonator-based add-drop filters in SiN: fabrication and analysis,” Opt. Express 12, 1437–1442 (2004). 18. E. S. Hosseini, S. Yegnanarayanan, A. H. Atabaki, M. Soltani, and A. Adibi, “High quality planar silicon nitride microdisk resonators for integrated photonics in the visible wavelength range,” Opt. Express 17, 14543–14551 (2009). 19. A. Gondarenko, J.S. Levy, and M. Lipson, “High confinement micron-scale silicon nitride high Q ring resonator,” Opt. Express 17, 11366–11370 (2009). 20. M. A. Foster, J. S. Levy, O. Kuzucu, K. Saha, M. Lipson, and A. L. Gaeta,“Silicon-based monolithic optical frequency comb source,” Opt. Express 19, 14233-14239 (2011). 21. J. F. Bauters, M. J. R. Heck, D. D. John, J. S. Barton, C. M. Bruinink, A. Leinse, R. G. Heideman, D. J. Blumenthal, and J. E. Bowers, “Planar waveguides with less than 0.1 dB/m propagation loss fabricated with wafer bonding,” Opt. Express 19, 24090–24101 (2011). 22. M. C. Tien, J. F. Bauters, M. J. R. Heck, D. T. Spencer, D. J. Blumenthal, and J. E. Bowers, “Ultra-high quality factor planar Si3N4 ring resonators on Si substrates,” Opt. Express 19, 13551–13556 (2011). 23. D. T. Spencer, Y. Tang, J. F. Bauters, M. J. R. Heck, and J. E. Bowers,“Integrated Si3N4/SiO2 ultra high Q ring resonators,” in Photonics Conference (Institute of Electrical and Electronics Engineers, Burlingame, CA, 2012), 141–142. 24. F. H. P. M. Habraken, LPCVD Silicon Nitride and Oxynitride Films: Material and Applications in Integrated Circuit Technology (Springer, 1991). 25. H. A. Haus, Electromagnetic Fields and Energy (Prentice Hall, 1989). 26. Q. Li, A. A. Eftekhar, Z. Xia, and A. Adibi, “Azimuthal-order variations of surface-roughness-induced mode splitting and scattering loss in high-Q microdisk resonators,” Opt. Lett. 37, 1586–1588 (2012). 27. Q. Li, A. A. Eftekhar, P. Alipour, A. H. Atabaki, S. Yegnanarayanan, and A. Adibi, “Low-loss microdisk-based delay lines for narrowband optical filters,” IEEE Photon. Technol. Lett. 24, 1276–1278 (2012). 28. E. S. Hosseini, S. Yegnanarayanan, A. H. Atabaki, M. Soltani, and A. Adibi, “Systematic design and fabrication of high-Q single-mode pulley-coupled planar silicon nitride microdisk resonators at visible wavelengths,” Opt. Express 18, 2127–2136 (2010). 29. Q. Li, M. Soltani, S. Yegnanarayanan, and A. Adibi, “Design and demonstration of compact, wide bandwidth coupled-resonator filters on a silicon-on-insulator platform,” Opt. Express 17, 2247-2254 (2009). 30. Z. Xia, A. A. Eftekhar, M. Soltani, B. Momeni, Q. Li, M. Chamanzar, S. Yegnanarayanan, and A. Adibi, “High resolution on-chip spectroscopy based on miniaturized microdonut resonators,” Opt. Express 19, 12356-12364 (2011). 31. C. W. Holzwarth, T. Barwicz, and H. I. Smith, “Optimization of hydrogen silsesquioxane for photonic applications,” J. Vac. Sci. Technol. B 25, 2658-2661 (2007).
Introduction
The silicon-on-insulator (SOI) platform has enabled impressive advances in many areas of integrated photonics [1]. For applications such as on-chip interconnects at the telecommunication wavelength range (i.e., 1300 − 1550 nm), key components, including hybrid silicon (Si) lasers [2], high-speed modulators and switches [3,4], hybrid detectors [5], band-pass filters [6] and optical buffers [7] have all been demonstrated. Despite these impressive achievements, Si faces some major challenges arising from its inherent material properties. For example, Si cannot compete with silicon dioxide (SiO 2 ) or silicon nitride (SiN) for some passive devices such as low-loss delay lines in terms of insertion loss and power handling capability, for the following two reasons: (1) Si devices have a relatively large propagation loss (typically > 10 dB/m limited by scattering loss) due to their large refractive index contrast and small mode volume [8,9]; (2) Si has strong nonlinear effects at high light intensities because of a large third-order nonlinear coefficient and also the free carriers generated by the two-photon absorption process [10]. On the other hand, SiO 2 or SiN devices have one or two orders of magnitude smaller propagation loss (0.1 − 1 dB/m) [11] and one order of magnitude smaller nonlinear coefficient with no free carrier loss [12], resulting in much smaller nonlinear effects than Si devices under the same power level. Despite these advantages, active elements such as modulators and phase shifters are difficult to realize in SiO 2 or SiN due to the difficulty in tuning the refractive index of the host material, while Si devices can be easily tuned using the thermo-optic or the plasma-dispersion effect [1].
Therefore, a coherent integration of SiN and Si can naturally lead to more effective and functional devices, especially for applications requiring low-loss performance (e.g., delay lines) or high power handling (e.g., frequency comb sources). For this purpose, vertical integration can be used, which has been proven to be an efficient method to integrate materials with different properties [13,14,15]. In Ref. [16], a Si-on-SiN integrated platform has already been demonstrated for waveguides using a bonding process. In this work, we integrate high-Q SiN microresonators into the SOI platform by depositing SiN on top of SOI wafers. Such a SiN-on-SOI platform can preserve the quality of each material (especially for Si), and the fabrication process is also simpler compared to that in the bonding approach. Successful demonstration of low-loss coupled devices in this hybrid platform requires: 1) low-loss SiN films to enable high-Q SiN microresonators, and 2) optimal coupling between the SiN microresonators and Si waveguides in the SOI platform. In the rest of this paper, we focus on our approach in addressing these two requirements. Section 2 is focused on the demonstration of high-Q SiN microresonators on SiO 2 , and Section 3 is devoted to the details of integration of the SiN layer to the SOI platform. Final conclusions are discussed in Section 4.
High-Q SiN microresonators on SiO 2
Within the areas of integrated photonics, SiN has been investigated for a range of applications. Initially, silicon-rich SiN is used as its low stress allows a thick SiN film (> 400 nm) without cracks [17]. Recently, the use of stoichiometric SiN (i.e., Si 3 N 4 ) is also actively pursued [18]. Thick SiN films up to 700 nm have been grown, and compact microresonators with radii down to 20 μm and intrinsic Qs (i.e., unloaded Qs) around 3 million have been demonstrated [19,20]. Another approach employs a thin layer of SiN (< 100 nm) as the guiding core, and ultralow-loss waveguides (with propagation loss less than 0.1 dB/m) have been demonstrated [21]. High-Q microresonators are also reported based on the same platform (with intrinsic Qs up to 55 million for a 9.8-mm-radius microring), though the bending radii have to be in the order of millimeters to avoid significant radiation loss [22,23].
For the purpose of dense integration, the SiN film has to be thick enough to enable compact microresonators and sharp waveguide bends. Numerical simulations show that a 400-nm-thick SiN layer permits the realization of microresonators with a radiation-limited Q more than a billion for radii as small as 40 μm. Low-pressure chemical vapor deposition (LPCVD) method is conventionally used for stoichiometric SiN deposition at temperatures around 800 o C with standard source gases of dichlorosilane (SiH 2 Cl 2 , or DCS) and ammonia (NH 3 ). The gas ratio between dichlorosilane and ammonia is the dominant factor for both the film stress and the material absorption, and a tradeoff exists between these two material properties. For example, low-stress SiN films can be grown using a large dichlorosilane to ammonia ratio (e.g., 3 − 6) with thicknesses up to a few micrometers [24], but such films usually have a high hydrogen content, which is responsible for the strong N-H bond material absorption around the wavelength of 1.55 μm [24]. (N-H bond has a vibrational mode with wavelength around 3 μm and its second harmonic is around 1.55 μm.) Decreasing the dichlorosilane to ammonia ratio (e.g., 0.1 − 1) can reduce the hydrogen content, but the stress of the film increases significantly. Therefore, for our applications, a careful balance between the film thickness and the material absorption has to be found. Also, an optimal post-annealing process has to be developed to further reduce the hydrogen content to minimize the material absorption loss.
As the first step of integrating SiN microresonators into the SOI platform, high-Q SiN microresonators are fabricated using SiN-on-SiO 2 wafers where the SiO 2 layer is thick enough (> 3 μm) to prevent leakage from the SiN layer to the Si substrate. The SiN deposition recipe is optimized using a Tystar Nitride LPCVD tool (DCS : NH 3 = 50 : 140 sccm at a pressure of 165 mT). To pattern the SiN samples, a JEOL JBX-9300FS electron-beam lithography (EBL) system is used with ZEP520A (by Zeon cooperation) as the e-beam resist, which is capable of defining fine features with a relatively good etch resistance. One problem with SiN is that it is an insulating material, and electrons can accumulate at the SiN surface as exposure progresses. This charge-up effect disturbs the EBL writing process, and under certain circumstances, it can become strong enough to cause fracture errors. To solve this problem, a conducting polymer, ESPACER (by Shawa Denko K.K.), is spin-coated on top of ZEP before the electron-beam exposure and is subsequently removed with deionized water before the development of ZEP. Next, the pattern is transferred to the SiN layer using plasma etching with a CF 4 /CHF 3 gas mixture in an Oxford Endpoint reactive ion etching machine. The etching recipe is optimized based on SiN-on-Si samples, whose conducting Si substrate helps reduce the charge-up effect from SiN so good scanning-electron micrographs (SEMs) can be taken to examine the etching quality. Figure 1(a) shows the SEM of the cross section of a 400-nm-thick SiN waveguide structure, and fairly vertical and smooth sidewalls have been achieved. After etching, a 1-μm-thick SiO 2 is deposited on the SiN devices using the plasma-enhanced chemical vapor deposition process. Finally, the optimum annealing process is developed in a Tystar Poly Furnace, and the optimal recipe is found to be 8 hours in an O 2 ambient and 4 hours in a N 2 ambient at 1100 o C [19]. Using the developed fabrication recipe, SiN microresonators are fabricated on wafers with a 400-nm-thick SiN layer deposited on a thermally grown, 4-μm-thick oxide layer. Figure 1(b) shows the optical micrograph of a 60-μm-radius SiN microring with a width of 8 μm (the access waveguide has a width of 1.2 μm and the gap is 700 nm), whose transmission measurements for the TE-polarized light (i.e., electric field predominantly parallel to the device layer) before and after annealing are shown in Fig. 2(a) . Figure 2(a) shows that before annealing, the resonance dips for the wavelength range of 1490 − 1560 nm are much shallower than those outside this range, indicating a strong material absorption at these wavelengths. Such an absorption spectrum is characteristic of the overtone absorption from the N-H bond [24]. Figure 2(a) also shows that after applying the optimized annealing process, the resonance dips become more uniform across the whole wavelength range, indicating that the hydrogen content has been significantly reduced. Figure 2(b) shows the zoom-in figure for one specific resonance (marked in Fig. 2(a)) around the wavelength of 1530 nm. As can be seen, before annealing, the absorption-limited intrinsic Q is around 0.15 million, corresponding to a propagation loss around 235 dB/m; but after annealing, the intrinsic Q dramatically increases to 6 million, corresponding to a propagation loss around 6 dB/m. In the next step, the limiting factor of the measured intrinsic Q is investigated. For our SiN microresonators whose radiation-limited Q is high enough (> 1 billion), there are two major sources of loss: 1) the material absorption loss from the remaining N-H bond after annealing, and 2) the scattering loss resulting from the sidewall roughness caused by fabrication imperfections. To distinguish these two loss effects, we can exploit their different scaling behaviors with the size of the resonator (i.e., R) [8]. To see that, we use Q's definition as [25] where ω is the angular frequency; U c is the energy of the resonant mode (that is proportional to the mode volume); and P loss is the power dissipation rate. Since our microring width is large, the fundamental radial mode does not interact with the inner sidewall, and its mode volume scales with the radius R as R 2 , i.e., U c ∝ R 2 [8]. (The fundamental radial mode is essentially a microdisk mode whose radial expansion increases linearly with R. If the microring width is small, then the radial expansion is limited by the ring width as a constant and its mode volume only scales linearly with R.) The material absorption loss is a volume effect that is proportional to the mode volume, i.e., P loss,abs ∝ R 2 . Therefore, if the resonator is limited by the material absorption loss, its intrinsic Q will not depend much on R. On the other hand, the sidewall scattering loss is a surface effect, indicating it scales linearly with R (assuming the field intensity at the periphery stays the same while scaling), i.e., P loss, scat ∝ R. As a result, if the resonator is limited by the sidewall scattering loss, the intrinsic Q will almost linearly increase with R.
(Rigorous numerical simulation gives slightly lower estimate. For example, the scattering Q at R = 240 μm is 5.54 of that at R = 40 μm.) In Fig. 3, the intrinsic Qs of the fundamental radial mode for several independently fabricated microresonators with radii ranging from 20 μm to 240 μm are shown, where each circle represents one measurement result and the dotted black line is the statistical average. From the almost linear behavior of the average intrinsic Q with R, we conclude that the intrinsic Q of our SiN microresonators is mainly limited by the sidewall scattering loss and not the material absorption loss. In Fig. 4, the transmission spectrum of a 240-μm-radius microring (with a width of 8 μm) is plotted, and different radial mode families are identified by comparing the measured free spectral ranges (FSRs) with the simulation results (see Appendix A for details). The insets to Fig. 4 depict the lineshapes of several resonances belonging to the first-and second-order radial modes, whose intrinsic Qs are measured to be between 17 − 20 million (propagation loss ∼ 2 dB/m). We also observe that the mode splittings of these resonances are varying. For example, for the second-order radial mode, the mode splitting for the resonance around 1567.4 nm is negligible while it is strong for the one around 1568.2 nm. Such variations of mode splitting and intrinsic Q are characteristic of a scattering-loss limited microresonator [26]. In addition, for a microdisk resonator, higher-order radial modes usually have larger FSRs (or smaller group indices) and weaker scattering losses. For the 240-μm-radius microring, however, this is not the case. Starting from the third-order radial mode, the interaction between the resonant mode and the inner sidewall becomes important and increases with the radial mode order. Consequently, these radial modes tend to exhibit smaller FSRs (or larger group indices) and stronger scattering losses (and thus lower intrinsic Qs).
Integrating SiN microresonators into the SOI platform
The second step of this work is to integrate high-Q SiN microresonators into the SOI platform. As illustrated by Fig. 5, we have adopted a vertical integration approach by depositing SiN films on top of the SOI wafers with SiO 2 as buffer. The coupling between the SiN microresonator and the underneath Si waveguide is based on evanescent field overlap and in principle could be lossless [25]. Moreover, the SiO 2 thickness can be varied to adjust the coupling strength between the SiN and Si devices. In Section 2, we showed that high intrinsic Qs can be obtained for the first few radial modes of a wide microring, while higher-order radial modes with lower intrinsic Qs and different wavelength dispersions coexist. For applications such as low-loss delay lines based on over-coupled microresonators [27], a strong coupling to one of the high-Q radial modes is needed while the coupling to the other radial modes has to be suppressed. On the other hand, for applications such as wavelength conversion based on the four-wave mixing process [20], achieving a critical coupling to one of the high-Q radial modes for the pump is preferred (to get strong field enhancement). It is also possible to employ the high-order radial modes with relatively low Qs for the signal and idler (to allow large bandwidth) by engineering the inner sidewall to satisfy the energy and momentum conservation [20]. Therefore, to device designers, it is important to have the capability to achieve a selective and controllable coupling to a specific radial mode. For this purpose, we use the pulley coupling scheme in a vertical coupling architecture in the SiN-on-SOI hybrid platform [27,28].
Vertical coupling based on the pulley coupling scheme
The pulley-coupled structure is illustrated by Fig. 6(a), which consists of a Si waveguide wrapped around the SiN microresonator for a certain coupling length. Using the first-order coupled-mode theory, the amplitude coupling coefficient between the nth (n = 1, 2, ...) radialorder resonant mode (SiN) and the mode of the access waveguide (Si) is given by [25] where i = √ −1; ε 0 is the vacuum permittivity; δV WG denotes the volume of the access waveguide (Si); n Si and n SiO 2 are the refractive indices of Si and SiO 2 (cladding), respectively; E E E n,m (r r r) is the electric field of the nth radial mode (of the SiN microresonator) with an azimuthal order m (the field is normalized so it corresponds to unit energy, i.e., 1/2 ε(r r r)|E E E n,m (r r r)| 2 dV = 1 with ε(r r r) being the permittivity of an isolated microresonator); and E E E WG (r r r) is the electric field Equation (2) can be simplified for the pulley structure by considering the azimuth direction (i.e., φ as shown in Fig. 6(a)) and the transverse plane (i.e., r and z as shown in Fig. 6(a)) separately. As illustrated in Fig. 6(b), we use R + Δr to denote the position of the inner sidewall of the Si waveguide, where R is the radius of the SiN microresonator and w is the width of the Si waveguide. Using the relations E E E n,m (r r r) = E E E n (r, z) exp(imφ ) and E E E WG (r r r) = E E E WG (r, z) exp[iβ WG (R + Δr + w/2)φ ] with β WG being the propagation constant of the waveguide mode, we arrive at where κ n is defined as with δ S WG being the cross section of the Si waveguide; L is the coupling length (L = φ L (R + Δr + w/2) with φ L being the pulley coupling angle as shown in Fig. 6(a)); sinc(x) ≡ sin(x)/x; and δ β n,m ≡ m/(R + Δr + w/2) − β WG . (The dependence of δ β n,m on the radial mode n is implicit, in the sense that around a given wavelength, different radial modes have different m, or, different effective indices.) According to Eq. (3), if we adjust the access waveguide dimension (and thus, β WG ) to make the desired radial mode phase matched (δ β desired,m ≈ 0) and other radial modes phase mismatched, a proper choice of L can result in a suppressed coupling to the undesired radial modes [27]. However, for a large resonator, the difference between the effective indices of different radial modes is small, and the required coupling length can be very large (L should be on the order of 2π/|δ β undesired,m |). In our case, the vertical coupling allows us to place the Si waveguide underneath the SiN microresonator, which offers us an additional degree of freedom to differentiate κ n in Eq. (3) among different radial modes. In Fig. 6(b), the mode profiles of the first four radial modes of a 240-μm-radius SiN microring (width 8 μm) are plotted, and one can intuitively expect that when the Si waveguide is underneath the mode center of the nth radial mode, κ n will reach its maximum. Using Eq. (4), κ n for the first four radial modes (i.e., n = 1 − 4) as a function of the Si waveguide position are numerically calculated, and the results are provided in Fig. 6(c). The Si waveguide dimensions are chosen to be 400 nm × 100 nm so its fundamental TE mode is almost phase-matched to the SiN resonant modes (n eff ≈ 1.6). The oxide thickness is fixed as 500 nm. As observed from Fig. 6(c), for Δr ≈ −2000 nm, κ 1 is at least three times of other κ n (n = 1), implying that even without phase-matching engineering, the power coupling coefficient (|t n,m | 2 ) between the access waveguide and the fundamental radial mode is almost one order of magnitude stronger than that for a different radial mode. On the other hand, when Δr > 0, κ n increases with the radial order n (see the inset of Fig. 6(c)), similar to laterally coupled structures. In such cases, the fundamental mode is difficult to excite using the conventional point coupling scheme, since κ 1 is much smaller than others [9].
Experimental demonstration
The fabrication process to integrate the SiN microresonator into the SOI platform are explained in the following: (a) starting with an SOI wafer with a Si thickness in the range of 80 − 110 nm and a 3-μm-thick buried oxide layer, the Si device layer is first patterned using hydrogen silsesquioxane (HSQ, also called XR-1541, which is a negative resist with similar chemical properties as SiO 2 ) as the EBL resist and then dry etched using a Cl 2 -based plasma [29,30]; (b) flowable oxide (FOx(R)-16 by Dow Corning) is spin-coated on top of SOI, resulting in an almost flat surface (see the SEM shown in Fig. 7(a)) with film thickness around 550 − 750 nm (depending on the spin-coat speed). The FOx film is first cured at 350 o C on a hotplate for 1 hour and then annealed in the oxygen ambient at 800 o C to convert it to SiO 2 (Si-H bond removed) [31], as can be confirmed from the SEM shown in Fig. 7(b); (c) using the LPCVD method, a 400-nm-thick SiN is deposited on top of the SiO 2 buffer (see Fig. 7(b)), and annealing is performed to remove the N-H bond; (d) in order to align the SiN microresonators to the underneath Si waveguides, we have markers made in the Si layer. However, because of the strong charge-up effect from the top SiN and SiO 2 layers, the Si markers are difficult to find under the SEM of the EBL system. To solve this problem, photolithography is employed to open a window (2 mm × 2 mm) on top of the Si markers, which can be exposed by removing the top SiN and SiO 2 layers using a dry etching process (CF 4 ); (e) SiN microresonators are fabricated after alignment to the underneath Si waveguides using the Si markers. The optical micrograph in Fig. 7(c) demonstrates that a good alignment between the SiN microresonator and the underneath Si waveguide has been achieved. As illustrated by Fig. 8(a), the fabricated samples are characterized by coupling light from a tunable laser to the Si waveguide input and collecting the transmitted light at the Si waveguide output. When the swept wavelength of the laser coincides with the resonances of the SiN microresonator, strong scattering light can be observed from the top infrared camera (see Fig. 8(a)), indicating a good coupling between the Si and SiN layers. The transmission spectrum for a 60-μm-radius SiN microring is provided in Fig. 8(b), where only the fundamental radial mode is excited due to the use of the pulley coupling scheme (Si waveguide: 450 nm × 110 nm (intentionally designed to be phase mismatched to the first-order radial mode with δ β 1 ∼ −0.4 rad/μm so a critical coupling can be achieved), Δr ≈ −2000 nm, L ≈ 10 μm, and oxide thickness ≈ 700 nm). The zoom-in figure for the marked resonance in Fig. 8(b) is also provided, suggesting an intrinsic Q of 2 million. Note that this Q value is about 40% of what we can get from a SiN-on-SiO 2 structure. (The statistical average of intrinsic Qs for 60μm-radius microresonators is around 4.5 million, see Fig. 3.) The reduction of the intrinsic Q has also been confirmed by fabricating SiN microresonators on the oxide substrate along with the SiN-on-SOI samples. Processed by identical fabrication steps, the intrinsic Qs of SiN-on-SOI samples are consistently lower (about half) than those obtained for SiN-on-SiO 2 ones. We believe the reduction of the intrinsic Qs is mainly due to the degradation of the SiN material quality deposited on the FOx surface, which is not perfectly flat as can be seen from Fig. 7(a). In the future, a chemical-mechanical polishing (CMP) for the FOx layer should improve the SiN quality and therefore the intrinsic Qs of SiN microresonators. Nevertheless, the results shown in Fig. 8 demonstrate the potential of forming low-loss devices in the SiN-on-SOI platform with considerably better performance measures (e.g., loss, maximum allowed intensity, tuning and reconfiguration features) compared to either the conventional SOI-based structures or the SiN-on-SiO 2 structures.
Conclusion
In summary, we demonstrated here a vertical integration of high-Q SiN microresonators into the SOI platform for applications requiring low-loss performance or a high power handling capability at the telecommunication wavelength range. Using a 400-nm-thick SiN layer, com- pact SiN microresonators have been successfully fabricated; especially, the ultra-high Q of 20 million (propagation loss around 2 dB/m) for a 240-μm-radius microring is, to the best of our knowledge, the largest reported to date for SiN microresonators at such size levels. The integration of SiN microresonators with SOI has been achieved by depositing SiN on top of the SOI devices using SiO 2 as the buffer. We showed the possibility of achieving selective coupling to one of the radial modes (in particular, the high-Q fundamental radial mode) of a wide SiN microring based on the vertical pulley coupling scheme. Microrings with a 60-μm-radius have been demonstrated on such a SiN-on-SOI platform, showing a single-mode operation with an intrinsic Q around 2 million. The decrease of the intrinsic Q compared to those obtained from the SiN-on-SiO 2 samples is believed to arise from the degradation of the SiN material quality deposited on the uneven SiO 2 buffer, whose surface quality could be improved by a CMP process in future to allow for higher intrinsic Qs of SiN microresonators integrated into the SOI platform. In the measured transmission spectrum, we can start with an arbitrary resonance and find its radial mode family using an approximate FSR (∼ 800 pm for 240 μm radius). The FSRs for the picked radial mode family can then be accurately measured from the transmission spectrum. Three such examples for the transmission plotted in Fig. 4 are provided in Fig. 9, where the normalized FSR is defined as FSR×2πR/c = 1/n g , with n g being the group index. The three dotted lines in Fig. 9 are the simulation results. For the 240-μm-radius microring with a width of 8 μm, the second-order radial mode has the largest FSR, and starting from the thirdorder radial mode, the FSRs decrease with the radial mode order due to the impact from the inner sidewall. Comparing the measured FSRs to the simulation results, the radial mode order of each resonance family can be identified. Figure 9 also shows that the scan of the laser is not strictly linear, since the measured FSRs are oscillating around its theoretical value with a relative error around ±0.25%, indicating that there is about ±2 pm error in the resonance wavelength measurement. (For small microresonators, the mode splitting could also contribute some errors in determining the exact resonance wavelength. But here, the mode splitting is on the order of 0.3 pm and could be neglected.) The next question is how accurate the measured intrinsic Qs are since the slowest scan rate (500 pm/s) of the continuous mode is still fast. The tunable laser (Agilent 81682A) offers a wavelength locking mode that allows a narrowband scan around a desired wavelength (starting wavelength) by applying a sweeping low-frequency voltage (-5 V to 5 V). Figure 10(a) shows the transmission measurement using the continuous mode for another 240-μm-radius microring around the wavelength of 1568.6 nm (second-order radial mode), which shows a measured intrinsic Q around 16.7 million. Figure 10(b) shows the narrowband scan using the wavelength locking mode for the same resonance by applying a triangular wave (-5 V to 5 V, 1 kHz frequency, sampled at 100k sample/s; starting wavelength 1568.5 nm), where the forward and backward scans do not overlap due to the hysteresis associated with the piezo tuning. To correct the hysteresis and also to obtain an accurate wavelength range, the same wavelength scan is applied to measure the transmission spectrum of a long waveguide (∼ 10 cm), which is a regular Fabry-Perot (FP) pattern with a period ∼ 10 pm and could be used as reference.
(The FP results from reflections at the cleaved facets. The same-type pattern can be observed in Fig. 4 with a larger period.) After such a calibration, the intrinsic Q is measured to be around 16 million. Therefore, we conclude that the transmission results from the continuous scan are accurate. . The inset shows the lineshape of a resonance around 1568.6 nm. (b) Transmission spectrum measured for the same resonance shown in (a) using the wavelength locking mode of the tunable laser. The applied voltage to the laser is a triangle wave with a frequency of 1 kHz. It is generated by the DAQ board and sampled at 100k sample/s. The blue and red curves depict the forward and backward scans, respectively; the dotted black line is the ideal FP response assuming a linear scan; and the inset shows the lineshape of the resonance around 1568.6 nm from the forward scan. | 7,896 | 2013-07-29T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Relationships of sex hormones with muscle mass and muscle strength in male adolescents at different stages of puberty
This study analysed the associations of sex steroids with fat-free mass (FFM) and handgrip strength in 641 Chinese boys. Serum total testosterone (TT) and oestradiol were measured by chemiluminescence immunoassay. Free testosterone (FT) and oestradiol were calculated. FFM and handgrip strength were measured by bioelectrical impedance analysis and a hand dynamometer, respectively. Generalised additive models and multiple linear regression were used to explore the relationships. A subgroup analysis was conducted in early-mid pubertal and late-post pubertal groups. Age, height, weight, physical activity, intake of dietary protein and/or stage of puberty were adjusted. TT and FT were positively related to FFM and handgrip strength, with a curvilinear relationship being detected for handgrip strength (p<0.050). This curvilinear relationship was only observed in the late-post pubertal group, suggesting a potential threshold effect (FT>11.99ng/dL, β = 1.275, p = 0.039). In the early-mid pubertal group, TT and/or FT were linearly or near-linearly related to FFM or handgrip strength (β = 0.003–0.271, p<0.050). The association between FT and FFM was stronger than that in the late-post pubertal group. This study found that serum T had different associations with muscle parameters in Chinese early-mid pubertal and late-post pubertal boys. In the late-post pubertal boys, serum T was curvilinearly related to muscle strength with a threshold effect and its link with muscle mass was weaker.
Introduction
Adequate muscle mass and muscle strength are essential to maintaining optimal health throughout life [1]. Studies have observed that muscle mass and muscle strength increase rapidly from childhood to puberty and reach a peak in early adult life, then decline with age from approximately the fifth decade [2][3][4]. Adolescents with low muscle strength are more likely to have low muscle strength in adulthood, a fact that suggests the need for concern about the levels of muscle power from a young age [5]. Moreover, in the child and adolescent populations, low muscle mass and muscle strength are found to be related to high risks of cardiovascular and metabolic diseases [1], and these may increase mortality and degrade the quality of life in the middle-aged and elderly population [6]. Inverse associations of muscle mass and muscle strength with metabolic risk factors, i.e., insulin resistance and blood lipids, have been demonstrated in adolescents by several studies [7]. Optimal development of muscle mass and muscle strength during adolescence is crucial not merely for general health, but also for preventing several common disorders, i.e., osteoporosis and sarcopenia, later in life [1]. In addition, optimal muscle growth is related to cognitive development and motor scores early in life [1,8]. Therefore, maximising the gains in muscle mass and muscle strength in early life may be a critical strategy for reducing the risks of some common and increasingly prevalent disorders in later life.
In boys during puberty, sex hormones may have dramatic activating effects for promoting rapid accumulation of muscle mass and the acquisition of muscle strength [1,2]. Testosterone (T) exerts important effects on muscles through various pathways. It can increase muscle mass by promoting myogenic differentiation of multipotent mesenchymal stem cells and by stimulating muscle protein synthesis [9]. Studies have reported that T is related to the accrual of muscle mass and the attainment of optimal muscle mass [10]. Puberty is associated with increased circulating T concentrations in adolescent boys [11]. Some studies in young men and adolescent boys have found that T is vital for the development and maintenance of muscle mass through its ability to stimulate whole-body protein synthesis and inhibit proteolysis, resulting in a net anabolic effect [12]. In addition, although the influence of oestradiol (E 2 ) on muscle mass and muscle strength is less understood in men, the oestrogen receptor has been found to be expressed in male skeletal muscle [13]. Some studies have speculated that E 2 could have an anabolic effect on muscle mass and could improve muscle strength in men and boys [13].
The influence of the serum concentrations of sex hormones on muscle mass and muscle strength in adolescent boys remains unclear due to the limited number of studies [14][15][16]. In adult men, the age-related decline in serum T concentrations has been implicated in reduced muscle mass and muscle strength [17]. T has also been used as a clinical supplement for older men to promote muscle mass gain [18]. Moreover, several studies in adult men have reported that the positive relationship between serum T and muscle strength could be nonlinear. In the Longitudinal Ageing Study Amsterdam, Schaap et al. found that only the concentration of total T (TT) in the 4th quartile group was related to muscle strength in 623 men [19]. The concentrations of sex hormones are among the factors that differ most prominently between the different stages of puberty. The results of these studies could imply that the effect of T on muscle varies between individuals at different stages of puberty. To date, no study has explored the relationship of serum T with muscle mass and muscle strength at different stages of puberty. As for serum E 2 , one study in 12.9-year-old boys and several studies in adult men observed that it was not associated with muscle parameters [14,20]. In contrast, Vandenput et al. found that high serum E 2 concentration was associated with higher lean mass in 3,014 Swedish adult men [21]. Understanding the impacts of sex hormones on muscles in adolescents may be indispensable for the optimal development of muscle mass and muscle strength.
Therefore, in this study, we first explored and investigated the relationships of serum T and E 2 concentrations with muscle mass and muscle strength in 641 Chinese boys aged 11-18 years using generalised additive models as well as multiple linear regression, and further examined the associations between sex hormone concentrations and two muscle parameters at different stages of puberty.
Study population
All participants were students at a secondary school in Jiangmen, China. The volunteers were recruited by posting advertisements and inviting them to attend a health talk presented by the investigators in each class at the school in 2015. The followed exclusion criteria were imposed: 1) a history of disorder or medication that may lead to abnormal sex steroid concentrations or muscle metabolism, i.e., musculoskeletal disorders, taking oestrogen, androgen or growth hormone treatment; 2) psychiatric or behavioural disorders; 3) critical illness, i.e., cancer; 4) contraindications for bioelectrical impedance analysis (BIA), i.e., implanted pacemaker, cardioverter defibrillator or diseases affecting the electrical resistance of the skin. However, no student had to be excluded due to these conditions. We assumed that the correlation coefficients between sex hormone and muscle mass, muscle strength were 0.38 and 0.45 in boys, respectively. The type I error rate was less than 0.05 (α = 0.05), and the power of test was 90% (β = 0.10). The maximum required sample size was 199. Although 697 participants were initially recruited, 56 of them were excluded due to missing data on their sex steroids. Finally, 641 participants were included in this analysis. This study was approved by the Ethics Committee of Sun Yat-sen University, and written informed consent was obtained from all participants and their parents or legal guardians.
Measurement of muscle parameters
The total-body fat-free mass (FFM) of the subjects was measured using BIA (InBody230, Biospace, Seoul, Korea), with 10 impedance measurements at two frequencies (20 and 100 kHz). The subjects were asked to refrain from vigorous exercise for 30 minutes before the test. The in vivo coefficient of variation (CV) for the FFM was 0.3%. All tests were performed by the same investigator, and the operation followed the standard procedures. The handgrip strength was measured by a hand dynamometer (EH101, CAMRY, Guangdong, China). The handgrip strength of every individual was measured for each hand twice, alternating sides between each measurement. The maximum value of handgrip strength was recorded. Due to the difference in palm size among the individuals, the appropriate handle distance was adjusted specifically for each subject before each test to obtain accurate measurement values. The subjects were asked not to swing their arms, bend their arms or stoop during the test. The precision of handgrip strength was 2.6%.
Hormone assays
Approximately 3-mL venous blood samples were collected from all subjects under fasting conditions (12-14-hour fasting) and stored at −80˚C. The concentrations of serum T, E 2 and sex hormone-binding globulin (SHBG) were measured by a fully automated chemiluminescent immunoassay system (Centaur XP, Siemens, Erlang, Germany). The albumin concentration was measured by the bromocresol green assay using a commercial kit (FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan). All samples were analysed in the same laboratory in accordance with the standard procedures. The in vivo CV values for all measurements were within 3.0%. The free T (FT) and free E 2 (FE 2 ) concentrations were calculated using a previously validated equation established by Vermeulen et al. [22] taking account of the concentrations of TT, total E 2 (TE 2 ), SHBG and albumin.
Assessment of covariates
A face-to-face interview was conducted at the school by trained personnel using a structured questionnaire to collect information of the subjects' demographic characteristics, lifestyle habits, history of disease and medications. Height and weight were measured using standardised equipment. Height was measured to 0.1-centimetre (cm) accuracy without shoes. Weight was measured to the nearest 0.1 kg without shoes or any heavy clothing. Body mass index (BMI) was calculated by dividing weight (kg) by height squared (m 2 ). Physical activity was assessed using the modified Chinese version of the Children's Leisure Activities Study Survey questionnaire [23] and expressed as metabolic equivalent (MET�h/d). Intake of dietary protein was estimated using a 3-day 24-hour dietary recall record and the Chinese Food Composition Table. Pubertal development stage was assessed using a Chinese version of the self-reported Pubertal Development Scale (PDS) [24]. Pubertal data were dichotomised as early-mid pubertal vs late-post pubertal according to the PDS data and the reference intervals of TT levels at different pubertal stages in boys [25,26]. The concentrations of fasting plasma glucose and insulin were measured by spectrophotometer (Nanodrop2000, ThermoFisher Scientific, Massachusetts, USA) and fully automated chemiluminescent immunoassay system (Centaur XP, Siemens, Erlang, Germany), respectively. Insulin resistance index was calculated by homeostasis model assessment of insulin resistance (HOMA-IR) as (fasting insulin mU/L) × (fasting glucose mmol/L) / 22.5.
Statistical analysis
Continuous variables were presented as mean and standard deviation (SD), or median and interquartile range. To investigate the association of sex steroids with muscle mass and muscle strength, the generalised additive model (GAM) was first adopted to explore the functional form of the association between the sex steroids and the two muscle parameters in all subjects. The GAM analysis was also conducted in subgroups classified according to different stages of puberty. Furthermore, the specific estimation and inference of the associations between the sex steroids and the muscle parameters were detected with a multiple linear regression analysis. If nonlinearity was detected in the GAM analysis, a two-piecewise multiple linear regression analysis was performed to determine the break-point of the association between the sex hormone and the muscle parameter [27]. Confounding factors were selected based on the previous literatures and the results of univariate analysis. The confounding factors, i.e., age, height, weight, physical activity, intake of dietary protein and stage of puberty (not for the subgroup analysis), were adjusted in all of the models. Besides the stage of puberty, all other confounding factors were included in the model as continuous variables. All statistical analyses were conducted with R software (version 3.5.1, Vienna, Austria) and SPSS (v20.0, Chicago, Illinois, USA). A two-sided p-value of less than 0.05 was considered as statistically significant.
Basic characteristics of the studied population
The basic information of the subjects is shown in Table
PLOS ONE
Sex hormones and muscle mass, muscle strength
Linear regression analysis of sex steroids with muscle parameters
Linear regression analysis was conducted to make the specific estimation and inference of the associations of the TT and FT concentrations with both muscle parameters in the early-mid pubertal group, as well as with the FFM in the late-post pubertal group (Table 2)
Threshold analysis for serum T and handgrip strength in the late-post pubertal group
The relationship between the FT concentration and handgrip strength was found to be curvilinear in the late-post pubertal group according to the GAMs. Therefore, a two-piecewise multiple linear regression analysis was conducted to investigate the potential threshold effects of the FT concentration on handgrip strength. The results showed that the break-point was at 11.99 ng/dL. As shown in Table 3, the handgrip strength showed a significant positive association with the FT concentration in the individuals with FT concentrations above 11.99 ng/dL (β
PLOS ONE
Sex hormones and muscle mass, muscle strength
Discussion
In this study, we investigated the associations of sex steroids with muscle parameters in 641 Chinese male adolescents. The results indicated that the concentrations of TT and FT were positively related to the FFM and handgrip strength, and a curvilinear relationship of the serum T concentration with handgrip strength was detected. Further subgroup analyses found that this curvilinear relationship was only observed in the late-post pubertal group, suggesting a potential threshold effect. In addition, compared with the early-mid pubertal group, the strength of association for the serum T concentration and the FFM became weaker in the latepost pubertal group. This study supported the hypothesis that a high concentration of serum T contributes to the gain of muscle mass and muscle strength in male adolescents, consistent with some previous studies [14,28]. A study of the 'Children of 1997' birth cohort in Hong Kong found that the FT concentration had a positive association with skeletal muscle mass in adolescent boys [14]. Hou et al. found that a higher TT concentration was related to a larger value of the
PLOS ONE
Sex hormones and muscle mass, muscle strength skeletal muscle index in 278 boys aged 15 years [28]. Ramos et al. reported that the TT concentration was positively correlated with the strength of knee extensors in 11-18-year-old boys [16]. Evidence indicates that the acquisition of muscle mass and muscle strength takes place largely during puberty and is promoted by sex steroids [29,30]. Both animal and human studies have demonstrated that the androgen receptors (ARs) are highly expressed in muscles and that the androgen-AR pathway is essential for increases in muscle strength and muscle mass [31]. Dramatic hormonal fluctuations in puberty are accompanied by marked changes in body composition, e.g. lean mass [29]. During puberty, TT production increases from approximately 0.3 mg/d to 7 mg/d with an increase in circulating T concentrations [32]. This may stimulate the increases in the numbers and sizes of muscle fibres and the numbers of muscle satellite cells and myonuclei [9]. Evidence also demonstrates that T can promote mitochondrial biogenesis and myoglobin expression [9], which might improve the muscle strength.
The results of the present subgroup analysis suggested that T contributed to the muscle strength but not the muscle mass in the late-post pubertal boys. In the early-mid pubertal group, however, the concentration of FT, as the most biologically active fraction, was positively associated with both the FFM and handgrip strength. These findings might imply that the links between T and muscle mass might become weaker in the late-post pubertal boys. A study including 777 Chinese male subjects aged 5-19 years observed a sharp increase in lean mass until the age of 14 years, after which the slope of the increase in lean mass became less steep and flattened out at the age of 16-19 years [33]. Some other studies have also reported similar findings. Veldhuis et al. found that the FFM was stable by 17-19 years in Caucasian boys [2]. For muscle strength, however, the findings of a cohort study indicated that it continuously increased in young men between 19 and 21 years of age [34]. Other studies have also found that the handgrip strength may increase continuously to a peak in early adult life around 30 years, and remain at the peak value through to midlife [3]. These findings suggest that the increases in muscle mass and muscle strength do not occur in parallel in the late-post pubertal boys. Furthermore, a study of the MrOS Hong Kong cohort comprising 1,489 adult men found that the effect of T on muscle strength might be independent of muscle mass [17]. Some studies have reported that other factors, e.g. neuromuscular adaptations, besides the increase in muscle mass could account for part of the increase in muscle strength during adolescence [35]. Therefore, the contribution of T to the increase in muscle strength might be less closely related to muscle mass in the late-post pubertal boys.
Our data also revealed that the relationship between the serum T concentration and muscle strength was curvilinear in the late-post pubertal group. To the best of our knowledge, no previous study has reported a nonlinear relationship between serum T and muscle parameters in adolescents. In this study, only for the late-post pubertal boys in the 87th percentile of FT concentration (above 11.99 ng/dL), their FT concentration had an association with the handgrip strength. This FT concentration corresponds to a total T concentration of approximately 708.00 ng/dL. This finding might imply that only those very few late-post pubertal adolescent boys with very high T concentrations could reap the benefits of T on muscle strength. Exercise is the general adopted method for substantial gains in muscle strength. Studies have found that men with high concentrations of serum T and high rates of exercise had the greatest increase in muscle strength [36]. Moreover, other studies have found that after exercise, the expression of ARs can be upregulated by muscle contraction, which might enhance T uptake to the muscle and potentially exert anabolic effects [37]. These findings could support the interactive effect of T and exercise on muscle strength. In this study, high serum T concentrations had a linear relationship with muscle strength in the early-mid pubertal boys, but among the latepost pubertal boys, serum T only contributed to muscle strength for those very few boys in the 87th percentile of FT concentration. These results might suggest that exercise intervention in the early-mid pubertal boys might have greater effects on muscle strength than in the late-post pubertal boys for most individuals.
This study investigated the relationships of the serum T concentration with muscle parameters in a larger sample size compared with previous studies, and revealed the different contributions of serum T to muscle mass and muscle strength between early-mid pubertal and latepost pubertal boys for the first time. This study also has some limitations. First, the FFM was measured using BIA, rather than magnetic resonance imaging (MRI), which is a gold standard method for measuring muscle mass. MRI has high accuracy and ability to differentiate between tissue types. However, the apparatus is not portable and requires highly specialized personnel, and the cost is high. These limit its use in a large-scale epidemiological study in normal adolescents [38]. BIA has been validated by MRI in this aspect, and the results indicated it as a good choice for a portable method of muscle mass measurement for Chinese. The concordance coefficient between BIA and MRI for the FFM is satisfactory (0.85) [39]. Second, although mass spectrometry is often taken as the gold standard method to measure sex steroid concentrations, this study used automatic immune analysers. Nonetheless, these are sufficiently reliable for epidemiological studies in general populations and are also widely used in clinical and reference laboratories [40]. Third, the use of handgrip strength test only cannot give a comprehensive assessment of muscle strength. For example, the hand grip strength may not be a good measure for the strength of lower extremity. Nonetheless, the hand grip strength is a reliable and simple surrogate of upper extremity strength, and some studies found that it correlates well with other muscle strength tests such as the knee extension strength [41,42]. Fourth, the clinical assessment with Tanner staging is widely recognized as the gold standard method for assessing pubertal development [43]. However, the privacy concerns make it difficult to obtain consent from the adolescents and their parents. PDS may be a useful and reliable alternative method for assessing puberty in adolescents, which is a low-burden self-reporting method without the physical examination [44]. Evidences supported that PDS had strong internal consistency and reliability (Cronbach's alpha: 0.83), and substantial correlation (0.82) with Tanner staging by clinical assessment in boys [45]. Discrepancies could exist between the self-reported and the actual pubertal stage in this study. Nonetheless, in order to increase the accuracy, besides the self-reported PDS data, the information of the serum testosterone levels was also adopted for the classification of pubertal stages according to the reported reference intervals at different pubertal stages. Finally, the changes in the FFM and handgrip strength over time were not available in this cross-sectional study. Further longitudinal studies are warranted for investigating the relationships of sex steroids with the changes in muscle mass and muscle strength in adolescent boys.
In conclusion, this study indicated that the serum T concentration as a positive predictor had different associations with muscle mass and muscle strength depending on the pubertal development stage of Chinese male adolescents. In the late-post pubertal boys, the links between serum T concentration and muscle mass became weaker compared with that in the early-mid pubertal boys, and a curvilinear relationship between the T concentration and muscle strength was found with a threshold effect. These findings are expected to improve our understanding of the effects of sex steroids on muscle mass and muscle strength accrual during male adolescence and to provide useful information for the establishment of a prediction model for the peak value of muscle mass and muscle strength. | 5,056.8 | 2021-12-02T00:00:00.000 | [
"Biology",
"Medicine"
] |
An instrumental variable random-coefficients model for binary outcomes
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration.
INTRODUCTION
In this paper, we analyse a random-coefficients model for a binary outcome, where β ≡ (β 0 , β 1 , β 2 ) are random coefficients. Although covariates W are restricted to be exogenous, covariates X are permitted to be endogenous in the sense that the joint distribution of X and random coefficients β is not restricted. We assume that in addition to the variables (Y, X, W ), the researcher observes realizations of a random vector of instrumental variables Z such that (W, Z) and β are independently distributed. Thus, our goal is to use knowledge of the joint distribution of (Y, X, W, Z) to set identify the marginal distribution of the random Correction Note: This article was first published online on the 2nd of September 2013, under a subscription publication licence. The article has since been made OnlineOpen, and the copyright line and licence statement was therefore updated in June 2014.
S2
A. Chesher and A. M. Rosen coefficients β, denoted F β , with the joint distribution of random vectors X and β left unrestricted. As a special case, we also allow for the possibility there are no exogenous regressors W . 1 As shorthand, we use the notationZ ≡ (W, Z) to denote the composite vector of all exogenous variables.
In order to characterize the identified set for F β , we carry out our identification analysis along the lines of , hereafter CRS, and . Like CRS, we consider a single-equation model for a discrete outcome, but here we restrict the outcome to be binary. However, the model (1.1) used in this paper features random coefficients, which are not present in CRS. The model is a special case of the general class of models considered in , where we provide identification analysis for a broad class of instrumental variable (IV) models. Like those models, the random-coefficients model (1.1) allows for multiple sources of unobserved heterogeneity whereas, traditionally, IV methods have been employed in models admitting a single source of unobserved heterogeneity. Thus, in this paper, we investigate, and illustrate by way of example, the identifying power of IV restrictions with multivariate unobserved heterogeneity in the determination of a binary outcome. The characterizations we employ rely on results from random set theory. These and related results have been used for identification analysis in various ways and in a variety of contexts by Beresteanu et al. (2011Beresteanu et al. ( , 2012, Galichon and Henry (2011), CRS, and Chesher and Rosen (2012, 2013. As in CRS and Rosen (2012, 2013), our characterizations make use of properties of conditional distributions of certain random sets in the space of unobserved heterogeneity.
The model also builds on the IV models for binary outcomes considered in Chesher (2010Chesher ( , 2013, where a single source of unobserved heterogeneity was permitted. There, it was found that even if parametric restrictions were brought to bear, the models were in general not point identifying. So, with the addition of further sources of unobserved heterogeneity, point identification should not generally be expected. The paper thus serves to illustrate in part the effect of additional sources of heterogeneity from the perspective of identification. The case of a binary outcome variable is convenient for illustration, but models that permit more variation in outcome variables might achieve greater identifying power. Binary response specifications that model β in (1.1) as a random vector include, for example, those of Quandt (1966) and McFadden (1976), and can be viewed as special cases of the discrete choice models of Hausman and Wise (1978) and Lerman and Manski (1981). These papers focus on specifications where all covariates and β are independently distributed, and where the distribution of β is parametrically specified, enabling estimation via maximum likelihood. Ichimura and Thompson (1998) and Gautier and Kitamura (2013) focus on the binary outcome model (1.1), again with covariates and random coefficients independently distributed, but with F β non-parametrically specified. Ichimura and Thompson (1998) provide sufficient conditions for point identification of F β in this case, and prove that F β can be consistently estimated via non-parametric maximum likelihood. Gautier and Kitamura (2013) introduce a computationally simple estimator for the density of β, and derive its rate of convergence and pointwise asymptotic normality. Gautier and LePennec (2011) propose an adaptive estimation method.
In contrast, we do not require that X || β and we employ instrumental variables Z. The use of an IV approach in a random-coefficients binary response model with endogeneity is new. A control function approach is employed by Hoderlein (2009) to provide identification results for marginal effects and local average structural derivatives when a triangular structure S3 is assumed for the determination of X as a function of Z. Hoderlein and Sherman (2011) study identification and estimation of a trimmed mean of random coefficient β when again endogenous variables can be written as a function of mutually independent instruments Z and control variables V , additionally employing some conditional median restrictions. However, our model does not require one to specify the form of the stochastic relation between X and Z, and is thus incomplete. 2 The random-coefficients logit model of Berry et al. (1995), hereafter BLP, now a bedrock of the empirical industrial organization literature, allows for endogeneity of prices using insight from Berry (1994) to handle endogeneity. Yet, the endogeneity problem in that and related models in industrial organization is fundamentally different from the one in this paper. Their approach deals with correlation between alternative-specific unobservables with prices at the market level, both of which are assumed independent of random coefficients that allow for consumer-specific heterogeneity. Important identification results in such models are provided by Berry andHaile (2009, 2010), and a general treatment of the literature on such models and their relation to other models of demand is given by Nevo (2011). Here, we focus on binary response models at a micro-level, rather than across separate markets, absent alternative-specific unobservables, and we allow random coefficients to be correlated with regressors. 3 Recent papers that give identification results for micro-level discrete choice models with exogenous covariates and high-dimensional unobserved heterogeneity include Briesch et al. (2010), Bajari et al. (2012), and Fox and Gandhi (2012). The latter also allows for endogeneity with alternativespecific special regressors and further structure on the determination of endogenous regressors as a function of the instruments.
The paper is organized as follows. In Section 2, we formally present our model and key restrictions, and we introduce a simple example in which there is one endogenous regressor and no exogenous regressors. In Section 3, we characterize the identified set for the distribution of random coefficients in the general model set out in Section 2, and we provide two further examples. In Section 4, we provide numerical illustrations of identified sets for subsets of parameters in a parametric version of our model for four different data-generation processes. We conclude in Section 5. The proof of the main identification result, which adapts theorems from CRS, is provided in Appendix 1. Appendix B provides computational details absent from the main text, and Appendix C verifies that there would be point identification in the example considered in the numerical illustrations of Section 4 if exogeneity restrictions were imposed.
Throughout the paper, we use the following notation. We use upper-case Roman letters A to denote random variables and lower-case letters a to denote particular realizations. For the probability measure P, P (·|a) is used to denote the conditional probability measure given A = a. The calligraphic font A is used to denote the support of A for any well-defined random variable A in our model. B denotes the support of the random-coefficients vector β, and S denotes a closed set on B. For any pair of random vectors A 1 , A 2 , A 1 ⊥ A 2 denotes stochastic independence, Supp(A 1 , A 2 , . . . , A n ) denotes the joint support of the collection of random vectors A 1 , A 2 , . . . , A n , and Supp(A 1 , A 2 , . . . , A n |b 1 , . . . , b m ) denotes the conditional support of (A 1 , A 2 , . . . , A n ) given realizations (b 1 , . . . , b m ) of random vectors (B 1 , . . . , B m ). The empty set is denoted by ∅. We use F β to denote the probability distribution of β, mapping from subsets of B to the unit interval. F is used to denote the admissible parameter space for F β , F is used to denote a generic element of F, and F * denotes the identified set for F β . We use cl(A) to denote the closure of a set A. Finally,Z ≡ (W, Z) with support denoted (Z) is used to denote the vector of all exogenous variables, andz = (w, z) for particular realizations.
THE MODEL
We now formally set out the restrictions of our model.
belong to a probability space ( , , P) endowed with the Borel sets on and the joint distribution RESTRICTION 2.2. For any (w, x, z) on the support of (W, X, Z), the conditional distribution of random vector β given W = w, X = x, and Z = z is absolutely continuous with respect to Lebesgue measure on B. β is marginally distributed according to the probability measure F β mapping from subsets of B to the unit interval, with associated density f β . F β is known to belong to some class of probability measures F. 4 RESTRICTION 2.3. (W, Z) and β are independently distributed.
Restriction 2.1 invokes the random-coefficients model for the binary outcome Y and defines the support of random vectors X, W , and Z. The restriction further requires that for all (x, w, z), both Y = 1 and Y = 0 have positive probability P (·|x, w, z). This simplifies the exposition of some of the developments that follow, but is not essential. We do not otherwise restrict the joint support of (W, X, Y, Z). We require that the joint distribution of (W, X, Y, Z) is identified, as would be the case under random sampling, for instance. Restriction 2.3 is our IV restriction, requiring independence of (W, Z) and β. Restriction 2.2 restricts F β to some known class of distribution functions. In principle, this class could be parametrically, semi-parametrically, or non-parametrically specified. Of course, greater identifying power will be afforded when F is parametrically specified. In our numerical illustrations in Section 4, β is restricted to be normally distributed, which is a common restriction in random-coefficients models.
As is always the case in models of binary response, it will be prudent to impose a scale normalization becausexβ > 0 holds if and only if c ·xβ > 0 for all scalars c > 0, wherex ≡ (1, x, w). 5 This can be done by imposing, for example, that B = b ∈ R k : b = 1 if F is non-parametrically specified, or by imposing that the first component of β has unit variance (e.g., when F is parametrically specified as in the following example, and as also employed in the numerical illustrations of Section 4).
S5
EXAMPLE 2.1 (ONE ENDOGENOUS VARIABLE, NO EXOGENOUS VARIABLES). Suppose X ∈ R and that there are no exogenous covariates W . Then, we can write (1.1) as with β = (β 0 , β 1 ) . Suppose that F is the class of bivariate normal distributions whose first component has unit variance. Then, defining α 0 , α 1 as the means of β 0 , β 1 , respectively, we have the representation where U 0 ≡ β 0 − α 0 and U 1 ≡ β 1 − α 1 are mean-zero bivariate normally distributed with the same variance as β = (β 0 , β 1 ). We then have from Restriction 2.3 that U || Z, and we can parametrize the distribution U ≡ (U 0 , U 1 ) as Knowledge of the parameter vector (α 0 , α 1 , γ 0 , γ 1 ) would then suffice for the determination of F β , so the identified set for F β can be succinctly expressed as the identified set for (α 0 , α 1 , γ 0 , γ 1 ).
IDENTIFICATION
For identification analysis, it will be useful to consider the correspondence which is the closure of the halfspace of B on which 2y − 1 and b 0 + xb 1 + wb 2 have the same sign. Application of this correspondence to random elements (W, X, Y ) yields a random closed set T (W, X, Y ). For any realization of the exogenous variablesz ∈Z ≡ Supp(W, Z), the conditional distribution of this random set givenZ =z is completely determined by the distribution of (W, X, Y ) givenZ =z, which is identified given knowledge of F 0 W XY Z under Restriction 2.1. The identified set for F β , denoted F * , is then the set of measures F ∈ F that are selectionable from the conditional distribution of T (W, X, Y ) givenZ =z for almost everỹ z ∈Z. 6 Intuitively, this holds because selectionability guarantees the existence of a random variableβ realized on ( , , P) and distributed F , such that P(β ∈ T (W, X, Y ) |z) = 1, a.e. z ∈Z. 7 Thus, there exists a random variableβ distributed F that delivers the conditional distribution F 0 XW Y |Z (·|z) , a.e.z ∈Z, and all such F are observationally equivalent.
S6
A. Chesher and A. M. Rosen As done in CRS for utility-maximizing discrete choice models without random coefficients and in for single-equation IV models more generally, we can exploit Artstein's Inequality (Artstein, 1983, see also Norberg, 1992, andMolchanov, 2005, Section 1.4.8.) to characterize the identified set through the use of conditional containment functional inequalities. Using the same steps taken in Theorem 1 of CRS, Artstein's Inequality guarantees that a distribution F is selectionable from the conditional distribution of T (W, X, Y ) givenZ = z, if and only if for all closed sets S ⊆ B, (3. 2) The use of the conditional containment inequality (3.2) reduces the problem of determining which F are selectionable from T (W, X, Y ) to a collection of conditional moment inequalities. In CRS and Rosen (2012, 2013), we devised algorithms to determine which test sets S are sufficient in the contexts of the models in those papers to imply (3.2) for all possible test sets S. The collection of such sets, referred to as core-determining sets, is crucially dependent on the support of the random set under consideration. By the same reasoning as in those papers, it is sufficient to focus on test sets that are unions of sets that belong to the support of T (W, X, Y ) conditional on the realization of exogenous variables (W, Z). For any such realization (w, z), the support of T (W, X, Y ) is the collection of sets We do not require that the conditional support of X given (w, z) coincide with its unconditional support, but in that case Supp(X|w, z) in (3.3) can be replaced with X , and the collection of sets T (w, z) does not vary with (w, z). The larger the conditional support Supp(X|w, z), the larger the core-determining collection of test sets will be. Given any (w, z), each element of T (w, z) is a halfspace in B, so the required test sets S take the form of unions of such halfspaces. 8 Alternatively, each such test set can be written as the complement of intersections of sets, each of which are complements of elements of T (w, z). This is convenient because the complement of each T ∈ T (w, z), denoted T c , is also a halfspace, and the intersection of halfspaces is a convex polytope. Thus, the collection of core-determining test sets S contains sets that are complements of intersections of halfspaces, equivalently complements of convex polytopes. The formal result follows.
which is the collection of sets that are complements of those in T (w, z).
The theorem follows from consideration of Theorems 1 and 2 of CRS, adapted to the random set T (W, X, Y ) defined in (3.1), which make use of Artstein's Inequality (Artstein, 1983) to prove sharpness; see also Norberg (1992) and Molchanov (2005), Section 1.4.8. The characterization of test sets for the containment functional characterization (3.4) of Theorem 2 in CRS stipulates that a core-determining collection of test sets S is given by those that are (i) unions of elements of T(w, z), and (ii) such that the union of the interiors of component sets is a connected set. In this paper, condition (ii) can be ignored because the sets T (w, x, y) and T (w , x , y ) are all halfspaces through the origin, ensuring that The test set B can indeed be safely discarded from consideration because from F (B) = 1, (3.4) is trivially satisfied. The equivalence of the containment functional characterization (3.4) and the capacity functional characterization (3.5) follow from the fact that, for any sets T , S, the events T ⊆ S and T ∩ S c = ∅ are identical.
Theorem 3.1 provides a characterization of the identified set of distributions of random coefficients for binary choice models with endogeneity and instrumental variables. In particular, the representation is given by a collection of conditional moment inequalities, with one such inequality conditional on the realization of exogenous variables (w, z) for each element of T ∪ (w, z) in (3.4), equivalently one conditional moment inequality for each element of T ∩ (w, z) in (3.5). These conditional moment inequalities can then be used as a basis for estimation and inference. To illustrate, suppose that the endogenous variable X is discrete, so that for any (w, z), T (w, z) is a finite collection of sets in B. We can therefore enumerate the elements of T ∪ (w, z) as S 1 , . . . , S J for some J < ∞. Suppose further that F * is parametrically specified up to finite-dimensional parameter θ , with typical element F (·|θ ) ∈ F * . The characterization of the identified set in (3.4) can then be written as those F (·|θ ) ∈ F * such that Inference can then be based on these conditional moment inequalities using, for example, methods from Andrews and Shi (2013) or Chernozhukov et al. (2013).
In some important special cases, considered in the following examples, characterization of the identified set can be further simplified. EXAMPLE 3.1. (NO ENDOGENOUS COVARIATES). A leading and well-studied example is the case where there are no endogenous variables X. Then, for each (w, z), we have where b is of the form b = (b 0 , b 2 ) . The intersection of these sets is {b ∈ B : b 0 + wb 2 = 0}, which has zero measure F β under Restriction 2.2, and their union is B, which has measure 1. It S8 A. Chesher and A. M. Rosen follows from similar reasoning as in Theorem 6 of Chesher and Rosen (2012) that for any (w, z) the inequalities of the characterizations of Theorem 3.1 produce moment equalities. Consider, for example, the containment functional inequalities of (3.4) delivered by all S ∈ T ∪ (w, z): The last inequality is trivially satisfied for all F ∈ F. Both the right-hand sides and the left-hand sides of the first two inequalities clearly sum to 1, implying that these inequalities must, in fact, hold with equality, giving ( 3.7) When there are no excluded exogenous variables z and F β is not restricted to a parametric family, these equations coincide with the identifying equations in Ichimura and Thompson (1998) and Gautier and Kitamura (2013). Ichimura and Thompson (1998) provide sufficient conditions for point identification. 9 When F is parametrically restricted, these equalities are likelihood contributions (e.g., integrals with respect to the normal density in Hausman andWise, 1978 or Lerman andManski, 1981), and less stringent conditions are required for point identification. In the absence of sufficient conditions for point identification, the moment equalities (3.6) and (3.7) a.e. (W, Z) nonetheless fully characterize the identified set.
EXAMPLE 3.2. (ONE ENDOGENOUS COVARIATE WITH ARBITRARY EXOGENOUS COVARIATES). Consider the common setting where there is a single endogenous explanatory variable, X ∈ R, as well as some exogenous explanatory variables W , a random k w -vector. Then, given any (w, z), the collection of sets T (w, z) is given by Suppose, for simplicity, that Supp(X|w, z) is discrete. Consider now a test set S which is one of the core-determining sets in T ∪ (w, z) and hence an arbitrary union of sets in T (w, z). 10 Any such S can be equivalently written as the set of b = b 0 , b 1 , b 2 ∈ B that satisfy one of the inequalities for some collections of values X 0 , X 1 ⊆ Supp(X|w, z). 9 The restrictions used to ensure point identification include the requirements that for some fixed c ∈ R kw , F β b : c b > 0 = 1, and that the distribution of W has an absolutely continuous component with everywhere positive density. Our characterizations of the identified set, given by (3.6) and (3.7) in the case of only exogenous covariates, do not require these restrictions. 10 The restriction to cases where Supp(X|w, z) is discrete is not essential but simplifies the exposition. An identical characterization of required test sets S can be shown more generally by referring back to (3.2) appearing in (3.4) and making use of the absolute continuity of F β from Restriction 2.2.
S9
Define now for each j = 0, 1, while if b 1 < 0, the inequalities can be written Furthermore, for any b ∈ B with b 1 ≥ 0, (3.10) implies (3.9), and for any b ∈ B with b 1 < 0, (3.9) implies (3.10). Thus, for any b ∈ B, (3.8) holds if and only if From this, it follows that one need only consider for each (w, z) test sets S of the form where x 2 ≥ x 1 and x 2 ≥ x 1 .
EXAMPLE 2.1. (CONTINUED). If we restrict attention to cases with no exogenous covariates W , there is in fact further simplification of the list of core-determining sets. To see why, note that in this case the collection T (w, z) = T (z) for any z reduces to Each element of T (z) is thus a halfspace in R 2 defined by a separating hyperplane through the origin intersected with B. The union of an arbitrary number of such halfspaces can be equivalently written as the union of no more than two such halfspaces. Therefore, the collection of core-determining sets T ∪ (w, z) = T ∪ (z) is given by the collection of test sets that can be written as either elements of T (z) or unions of a pair of elements in T (z), where for any x ∈ X and y ∈ {0, 1}, The characterization applies for either continuous or discrete X, but if X is discrete with K points of support, there are no more than 2K 2 sets in T ∪ (z) for any z ∈ Z. This follows from noting there are 2K unique (x, y) pairs and the number of all pairwise unions (including the union of each set with itself) is (2K) 2 /2, with division by two from the observation that for any (x 1 , y 1 ) and (x 2 , y 2 ), T (x 1 , y 1 ) ∪ T (x 2 , y 2 ) = T (x 2 , y 2 ) ∪ T (x 1 , y 1 ).
In the numerical illustrations that follow we consider various instances of Example 2.1, where there are no exogenous covariates W and where F is restricted to a parametric (specifically Gaussian) family. In the illustrations, we investigate identified sets for averages of (β 0 , β 1 ), and we show that this affords further computational simplification, in the sense that for any fixed candidate values of (Eβ 0 , Eβ 1 ), we need only consider test sets S that are unions of two elements of T (w, z) in order to check whether such candidate values belong to the identified set. S10 A. Chesher and A. M. Rosen as the probability that U belongs to the set U where θ = (α 0 , α 1 , γ 0 , γ 1 ) and when β is distributed F β with mean α and variance governed by parameters (γ 0 , γ 1 ). Given the restriction that β = (β 0 , β 1 ) is bivariate normally distributed, knowledge of θ implies knowledge of F β . Thus, we consider the identified set for θ , denoted * , and focus attention on the identified set for (α 0 , α 1 ), the projection of the first two elements of * on R 2 .
In two cases (N1 and N2), the parameters are set such that X is endogenous, and in another two cases (X1 and X2), they are set such that X is exogenous. We consider two possibilities for the coefficient δ 1 multiplying instrument Z in the determination of X in (4.1): δ 1 = 1 (N1 and X1) and δ 1 = 1.5 (N2 and X2). All parameter settings are shown in Table 1. Table 2 shows the If the exogeneity restriction X || β is imposed then, as shown in Appendix C, the resulting model point identifies the full parameter vector θ . In the structures delivering probability distributions in cases X1 and X2, it is the case that X || β holds. However, we calculate identified sets for a model without the exogeneity restriction and thereby show the substantial loss in identifying power arising when exogeneity cannot be assumed to hold.
Calculation of probabilities
To illustrate identified sets, we computed the conditional probabilities P [X = x k |z] and P [Y = 0 ∧ X = x k |z]. P [X = x k |z] is given by where (·) denotes the standard normal distribution function and λ ≡ δ 2 2 + 2δ 2 δ 3 γ 0 + δ 2 3 γ 1 + γ 2 0 + δ 2 4 . (4.2) The conditional probability P [Y = 0 ∧ X = x k |z] can be calculated as the difference between two normal orthant probabilities because, when Z = z, we have from which we see that P [Y = 0 ∧ X = x k |z] is indeed the difference between two normal orthant probabilities. The conditional probability P [Y = 1 ∧ X = x k |z] can then be obtained by subtracting P [Y = 0 ∧ X = x k |z] from P [X = x k |z]. 11
Calculation of projections
We calculate two-dimensional projections of the four-dimensional (4D) identified set for θ 0 , giving results for the projection on to the plane on which lie (α 0 , α 1 ). This is the identified set for the mean of the random coefficients (β 0 , β 1 ). We calculate the projections as follows. 12 The full 4D identified set is * = θ ∈ : where S = T ∪ (z) is a collection of 32 core-determining sets of the form described for Example 2.1 in Section 3, specifically (3.11), in the present case where X has four points of support. G U (S, θ) is the probability mass placed on the set S by a bivariate normal distribution with parameters θ . The probabilities P [T (X, Y ) ⊂ S|Z = z], z ∈ Z, are identified under Restriction 2.1. For computational purposes, we make use of the following discrepancy measure which can be used to characterize the full 4D identified set as follows: To compute identified sets for subvectors of parameters, let θ c denote a list of one or more elements of θ , and let θ −c denote the remaining elements of θ . The projection of the identified set on to the space in which θ c resides is the set of values of θ c for which there exists θ −c such that θ = (θ c , θ −c ) lies in the identified set * . We calculate this set, * c , as the set of values θ c for which the value of min θ −c D(θ c , θ −c ) is non-positive: * c = θ c : min (4.5) Here, D(θ c , θ −c ) is to be understood as the function defined in (4.4) applied to that value of θ with subvectors equal to θ c and θ −c . We perform this minimization using the optim function in base R. Figure 1 shows the projections of the identified set in cases N1 and N2 in which X is endogenously determined. The probability generating value (α 0 , α 1 ) = (0, −1) is plotted. When the parameter δ 1 = 1.5 (drawn in beige, labelled Case N2), the area of the projection is smaller than when δ 1 = 1.0 (drawn in blue, labelled Case N1). Most values in the projection when δ 1 = 1.5 lie inside the projection obtained when δ 1 = 1.0, but at high values of α 0 there is a very small region of the first projection that is not contained in the latter. Note that this can happen because even though the slope coefficient on Z in (4.1) is larger in the δ 1 = 1.5 case, this does not guarantee that the quantity max z∈Z P [T (W, X, Y ) ⊂ S|z] providing the lower bound of the inequalities in (4.3) is larger than in the δ 1 = 1.0 case. Figure 2 similarly illustrates projections of the identified set for cases X1 and X2 in which X is exogenously determined in the probability generating process. In this case, the projection of the identified set when δ 1 = 1.5 is a subset of that when δ 1 = 1.0. The identified sets are larger in the exogenous X cases, even though the predictive power of the instrument is the same as in the endogenous X cases. This occurs because the scale on which (α 0 , α 1 ) is measured differs in the two cases. 13 Computations for both figures were implemented as described in Appendix B, with the alphahull parameter set to 5.
In all cases, the projections contain no positive values of α 1 , so the model allows one to sign α 1 and the hypothesis H 0 : α 1 ≥ 0 is falsifiable.
CONCLUSION
In this paper, we have provided set identification analysis for a model of binary response featuring random coefficients and potentially endogenous regressors. The regressors in question are not 13 The scale difference arises because of the differential variability of the index U 0 + U 1 X in (2.1) as measured by the conditional variance given X and Z. Calculations using simulated values of the unobservables show that this is larger at every value of X and Z in the exogenous X case. restricted to be distributed independently of the random coefficients. We have shown that with an IV restriction we can apply analysis along the lines of that in CRS and to characterize the identified set as those distributions that satisfy a collection of conditional moment inequalities. In our numerical illustrations of Section 4, there are 32 such inequalities, one for each core-determining set, which hold conditional on any value of the instrument. While our focus was on identification, recently developed approaches for estimation and inference based on such characterizations, such as those of Andrews and Shi (2013) and Chernozhukov et al. (2013), are applicable. In some settings, the number of core-determining sets in the full characterization can be quite large, necessitating some care in choosing the number to employ in small samples. Issues that arise as a result of many moment inequalities have been investigated in an asymptotic paradigm by Menzel (2009). With discrete endogenous variables having finite support, the number of conditional moment inequalities can be large, but is necessarily finite, and future research on finite sample approximations for inference and computational issues is warranted.
We have provided numerical illustrations of identified sets under particular data-generation processes. We have given an overview of the computational approach we used for computing these identified sets, and details are set out in Appendix B.
Although our computational approaches are adequate for the examples considered, we have no doubt that they can be improved, either by developing more efficient implementations, or by devising new computational approaches altogether. Nonetheless, the illustrations serve to demonstrate the feasibility of computing identified sets in one particular setting in the general class of IV models studied in . These IV models can admit high-dimensional unobserved heterogeneity, for example through a random-coefficients specification such as the one studied in this paper. APPENDIX A: PROOF OF THEOREM 3.1 Proof of Theorem 3.1: Following the same steps as in the proof of Theorem 1 of CRS applied to the random set T (W, X, Y ) and exogenous variablesZ = (W, Z) in place of T v (Y, X; u) and instruments Z in the notation of that paper, we obtain where F (B) denotes all closed subsets of B. Then, the application of Theorem 2 of CRS, specifically part (i), further gives that F (B) above can be replaced with unions of members of the support of T (W, X, Y ). Then, using the same reasoning as in Lemma 1 of Chesher and Rosen (2012), it follows that when considering probabilities conditional on (W, Z) = (w, z), F (B) can be replaced by unions of elements of the conditional support of T (W, X, Y ) given the realization of the exogenous variables, namely T ∪ (w, z). The representation follows from the equivalence that for all S ⊆ B, F (S c ) = 1 − F (S), and for allz ∈Z,
APPENDIX B: COMPUTATIONAL DETAILS
In this appendix, we provide computational details for the numerical illustrations of Section 4 not provided in the main text.
B.1. Calculation of probabilities G U (S, θ)
Each set S in the collection T ∪ (z) = T ∪ is the union of one or more contiguous cones centred at the point (α 0 , α 1 ), which we refer to as elementary cones. The slopes of the rays defining the cones are determined entirely by the values of the points of support of X. In the case K = 4, there are eight such cones. For each value of θ = (α 0 , α 1 , γ 0 , γ 1 ) encountered, we calculate the probability mass supported on each of the eight cones by a bivariate normal density function with mean (0, 0) and variance matrix entirely determined by (γ 0 , γ 1 ). The probability mass supported by a particular set S at the value of θ is obtained by adding the masses on the appropriate cones. Thus, we are able to compute the probability mass G U (S, θ) allocated to each of the 32 core-determining sets by summing probabilities obtained for the eight elementary cones. The probability masses on each elementary cone are obtained by numerical integration after reexpressing the integrand in polar coordinates. In our R code, the numerical integrations are carried out by using the adaptIntegrate function provided in the cubature package (Johnson, 2011). We have also programmed this calculation in MATHEMATICA using the NIntegrate function and an integrand, which is the appropriate bivariate normal density function with values outside the cone of interest set to zero using the Boole function. We obtained very close agreement.
The numerical integrations are necessarily computationally burdensome and some inaccuracy is inevitable, which has a knock-on effect on the determination of membership of projections.
B.2. Calculation of projections
First approximations to the (α 0 , α 1 )-projections of identified sets were obtained by evaluating over a coarse grid of values of (α 0 , α 1 ). Refinements were then obtained by using a bisection procedure to search down a sequence of rays defined by angles γ ∈ [0, 2π ], each passing through the probability-generating value (α 0 , α 1 ) = (0, −1), which is known to lie in the projection. Each ray was stepped along until a value of (α 0 , α 1 ) outside the projection was found. A value midway between this value and the last value found in the projection was then evaluated for membership of the projection. By repeated bisection, a good approximation to the position of the boundary of the identified set along the ray under consideration was obtained. Sweeps were also made in directions parallel to the α 0 and α 1 axes to refine the boundary approximations in areas where it was relatively non-linear. These were helpful in confirming the near convexity of the projections, which is sufficient for our bisection-along-rays procedure to give a good view of the entire boundary.
The objective function minimized in (4.5) when determining membership of the identified set is not very well behaved. There are points at which it is not differentiable and there appear to be some places in which there are small jump discontinuities. One difficulty is that the terms G U (S, θ) depend upon eight numerical integrals of bivariate normal density functions, and the inaccuracy in calculating these affects the computation of the minimum in (4.5). The effect is likely to be dependent on the parameter value (α 0 , α 1 ) being considered.
There is plenty of scope for improvement in the numerical procedures employed here. In particular, a very small further investment would deliver a much more efficient method of searching down a ray for an initial point outside the identified set. The method we use relies on the near convexity of the projection There were a few cases in which isolated points appeared to be in the projections. These were examined individually and, in most cases, by choosing different starting points for the parameters θ −c of the minimization, the points were found on recalculation not to be in the projection. The remaining isolated points had a minimized value of the objective function in (4.5) that was very close to zero. The graphs of the identified set shown here were produced by assigning points with values of the minimized objective function less than 0.001 to the projection.
B.3. Graphics
The projections calculated using our approximations are not convex although the departures from convexity are quite small. We do not know whether the projections are, in fact, convex with the non-convexity arising because of approximation errors. In this circumstance, it seems unwise to draw boundaries of projections as the convex hulls of the points calculated to lie in the projections, although in fact there is not so great an error produced by proceeding in this way. The projections drawn in Figures 1 and 2 are alpha-convex hulls calculated using the ahull function provided in the R package alphahull (Pateiro-Lopez and Rodriguez-Casal, 2009) with the alphahull parameter set equal to 5. We experimented with different values of this parameter and found that the differences in the illustrations were minute.
APPENDIX C: IDENTIFICATION IN EXAMPLE 2.1 WITH EXOGENOUS X
Consider the setting of Example 2.1, but where, in addition, X is restricted to be exogenous. Here, we show that the Gaussian random-coefficients probit model is point identifying in this case. | 9,654.4 | 2012-10-23T00:00:00.000 | [
"Mathematics",
"Economics"
] |
Multi-channel EEG recordings during 3,936 grasp and lift trials with varying weight and friction
WAY-EEG-GAL is a dataset designed to allow critical tests of techniques to decode sensation, intention, and action from scalp EEG recordings in humans who perform a grasp-and-lift task. Twelve participants performed lifting series in which the object’s weight (165, 330, or 660 g), surface friction (sandpaper, suede, or silk surface), or both, were changed unpredictably between trials, thus enforcing changes in fingertip force coordination. In each of a total of 3,936 trials, the participant was cued to reach for the object, grasp it with the thumb and index finger, lift it and hold it for a couple of seconds, put it back on the support surface, release it, and, lastly, to return the hand to a designated rest position. We recorded EEG (32 channels), EMG (five arm and hand muscles), the 3D position of both the hand and object, and force/torque at both contact plates. For each trial we provide 16 event times (e.g., ‘object lift-off’) and 18 measures that characterize the behaviour (e.g., ‘peak grip force’).
Background & Summary
The idea of extracting signals related to object manipulation from EEG recordings in humans seems reasonable given that even basic motor tasks engage large parts of the human cortex 1 . It is, however, not known how much information can actually be decoded from EEG. Specifically, it is unclear to what extent it is possible to extract signals useful for monitoring and control of manipulation tasks, for instance, to control an upper limb prosthetic device to generate a power grasp or a pinch grasp involving the thumb and index finger. While successful EEG decoding of reaching trajectories has been reported 2 , this claim is controversial 3 .
We present a dataset that allows critical evaluations of the utility of EEG signals for prosthetic control of object manipulation. It is based on an established and prototypical paradigm to study precision graspand-lift (GAL) of an object, introduced in the early 1980 s by , and subsequently used in thousands of studies.
The correct completion of the GAL task depends on multimodal sensory activity correlated with specific events such as object contact, lift-off, and replacement. This control policy, in which feedforward control routines operate between sensed discrete events, is known as the Discrete Event Sensory Control policy (DESC; refs 7-9). As these events are crucially important for effective GAL, if they cannot be predicted from the EEG signal, than the EEG signal is of limited use for BCI control of robot hand manipulation.
We collected data from twelve participants in the new dataset WAY-EEG-GAL (WAY: Wearable interfaces for hAnd function recoverY, the funding European project), which contains a total of 3,936 ( = 12 · 328) grasp and lift trials. The participant's task in each trial was to reach for a small object, grasp it using their index finger and thumb, and lift it a few centimetres up in the air, hold it stably for a couple of seconds, and then replace and release the object. The beginning of the reach and the lowering of the object was cued by an LED, otherwise the pace of the task was up to the participant. During all trials, we recorded 32 channels of EEG, 5 channels of EMG from the shoulder, forearm, and hand muscles, the position of the arm, thumb and index finger and the object, and the forces applied to the object by the precision grip. We defined 16 behaviourally relevant events and extracted them for every trial. These event times are available along with the scripts used to generate them and the raw data.
In all series, the object's properties were several times changed in a manner that was unpredictable to the participant with respect to weight (165, 330, or 660 g), contact surface (sandpaper, suede, or silk), or both. Such changes are known to induce specific modifications to the required muscle coordination. For example, both grip force and lift force must increase when the object's weight increases, whereas only the grip force must increase when the object's weight is unchanged but the surface friction is decreased. We confirmed that all participants adjusted their fingertip forces according to the object's properties.
The size and richness of this dataset enables investigations of the information content of EEG during dextrous manipulation; for instance, can EEG be used to identify ▪ the intention to reach and grasp? ▪ the hand positions and velocities? ▪ the onset of the load phase, i.e., the participant's intention to apply lifting forces? ▪ when an object is replaced on a support for subsequent release? ▪ that the properties of the object have unexpectedly changed?
In short, are EEG signals reliable for the control of prosthetic devices? Combining the well-defined GAL paradigm with EEG recordings allows investigations into the information content of EEG and has the potential to lead to new EEG-based techniques for prosthetic device control.
Data and scripts have been made available under the terms of Attribution 4.0 International Creative Commons License (http://creativecommons.org/licenses/by/4.0/).
Participants
An ad calling for participation was posted at Umeå University, summarizing basic information about the study and promising 100 SEK per hour for at least two hours. Among those who responded, only righthanded individuals were selected as participants (n = 12, 8 females, age 19-35) and they signed a consent form (included within Supplementary File 1-Information.pdf) in accordance with the Declaration of Helsinki. The experimental procedures were approved by the Ethical Committee at Umeå University. The participants were told that 'the aim is to study how the brain and muscles are coordinated when handling an object'.
Sensors
Four types of carefully placed sensors recorded kinematics, forces, muscle activations, and brain activity. Four 3D position sensors (labelled P1-P4 in Figure 1a; FASTRAK, Polhemus Inc, USA; links to equipment information are provided in Supplementary File 1) recorded the position (XYZ Cartesian coordinates) and orientation (azimuth, elevation, and roll) of the object, the index finger, the thumb and the wrist. On the sides of the object there were two surface contact plates each coupled to a force transducer that recorded 3 force and 3 torque channels (ATI F/T 17; Figure 1d) electromyography (EMG) sensors (Figure 1b), were placed on pertinent right arm muscles, viz., the anterior deltoid, brachioradial, flexor digitorum, common extensor digitorum, and the first dorsal interosseus muscles (Figure 1b). The EEG cap (Figure 1c; ActiCap) recorded from 32 electrodes in a standard configuration (an image file showing the electrode locations and a data file with the channel coordinates are available in Supplementary File 2-Utilities.zip).
Data acquisition
The EEG cap was used in conjunction with a BrainAmp EEG signal amplifier. BrainAmp sampled at 5 kHz and band-pass filtered each channel from 0.016-1,000 Hz. The amplifier software VisionRecorder digitized and filtered the raw EEG data, and passed it to BCI2000 10 for data storage. A target sampling rate of 500 Hz was set in the amplifier software, which used an adapted low-pass filter to prevent aliasing. All other signals were sampled using SC/ZOOM (developed at Department of Integrative Medical Biology, Umeå University). The EMG signals were sampled at 4 kHz, and all others at 500 Hz. In addition to the kinetic and kinematic sensors, we recorded the object's state, i.e., the prevailing contact surface (sandpaper, suede, silk) and weight (165 g, 330 g, 660 g), the state of the LED that indicated to the participant to start and terminate a trial (Figure 1b,d), and the state of the LED that showed the researcher to change contact surfaces.
To enable secure synchronization between SC/ZOOM and the EEG recording system, SC/ZOOM generated a continuous random signal that jumped between 0 and 1 at~4 Hz which was recorded in both systems. By analysing the lags in the cross-correlation of the two respective sync channels, the EEG signals and the SC/ZOOM signals could be synchronized with an error ≤2 ms.
The object
The object to be grasped and lifted was only partially visible to participants (Figure 1b,d). When the object was lifted, a PVC tube (Ø50 mm) that contained the cables from the force transducers and the Polhemus sensors also became visible.
The object's weight and contact surface plates could be changed between trials without changing the object's visual appearance. Changing the surface required researcher intervention. The surfaces were attached to the object by niobium magnets and could easily be replaced. Changing the weight was automated, by activating one or both electromagnets at the bottom of the device (Figure 1d; Supplementary File 3-Weight Changes.avi shows a cutaway view of the weight changing mechanism under the table, for two transitions, from 165 to 330 to 660 g). Several flexible and low-friction PVC rods under the table helped in centre-aligning the object during lifting tasks.
A translucent Perspex rectangle with a centre hole (Ø20 mm) was suspended by rubber bands above the object (Figure 1b,d). A LED mounted in the rectangle could be turned on and off: each trial commenced when the rectangle was illuminated and ended (i.e., the object was to be replaced on the table) when the LED was turned off. The participants were asked to lift the object such that the Polhemus sensor (labelled P1 in Figure 1a) was positioned at the centre of the hole of the Perspex rectangle.
Preparation
The instruction documents used by the researchers and given to the participant before the experiment, respectively, are available in Supplementary File 1. The per-lift instructions to the participants were: Sit close to the table, relax your shoulder and place your upper arm next to your body. The elbow joint shall be higher than the wrist. During performance of the task, the forearm shall not touch the table. Your left arm should rest close to your waist. The red light is the signal to reach out and lift the object. Grasp the object with your thumb and index finger, in the middle of the grey surface and lift the object about 5 cm from the table. You should lift the object into the circle and hold it there until the red light turns off. Place the object on the table and place your arm next to your body. You shall rest your hand on the 'blue surface'-relax your shoulder.
Asking the participants to position the small red sphere on the top of the object in the opening at the centre of the illuminated rectangle provided an (albeit trivial) task objective, when performing the potentially boring task of lifting the object hundreds of times. They were given sound-masking earplugs to wear during the task.
Each experiment was carefully monitored and controlled: One experimenter controlled SC/ZOOM that recorded all non-EEG signals and generated the sync signal, was in charge of changing the surfaces on the object, and made sure that the participant followed the protocol (e.g., by returning the hand to the blue surface after a trial). A second experimenter was in charge of the EEG signals and their recording, which started after he verified the sync signal's appearance.
There were three alternate surface pairs, one for each surface type. During series involving surface change, the researcher replaced the surfaces on the object between every trial, sometimes to the same surface type and the stand with surface plates were kept out of the participant's view. To further eliminate any useful predictive cues, the experimenter always made the same movements and the plates were constructed to be visually practically indistinguishable.
The researcher knew which surface to select based on the lighting pattern of LEDs which were controlled by SC/ZOOM. After replacing the surface, the experimenter pressed a button, which caused www.nature.com/sdata/ SCIENTIFIC DATA | 1:140047 | DOI: 10.1038/sdata.2014.47 SC/ZOOM to generate a random time interval between 0 and 2 s, after which the participant's LED turned on. During trials without surface change, the light would automatically turn on once the participant digits had been at least 15 cm away from the object for 1-3 s. The participant LED turned off automatically after the object had been in the circle for 2 s. A video of an example trial is included as Supplementary File 4-Example Trial.avi. The participant waits with the arm resting on the blue surface while the assistant changes the contact plate. The assistant gets out of the way. The participant watches the LED. It turns on. The participant reaches for the object and grips it with forefinger and thumb. The participant lifts the object, holding the red sphere steady within the circle. The LED turns off. The participant lowers and releases the object in its resting place. The participant retracts the arm, back onto the blue surface and the trial is over.
Series
Each participant performed 5 different types of experimental series. The practice series involved repeated lifting with the object at 330 g to familiarize the participant with the task (the practice series was not included in the extracted data).
The weight series involved 34 lifts with 12 unpredictable weight changes (between 165, 330 and 660 g). Six different weight series schedules were constructed so that the same weight was repeated 1-4 times and then changed. The friction or surface series involved 34 lifts with variable surface friction (sandpaper, suede or silk). Six different series schedules were constructed using the same logic as for the weight series. All sequences and changes were balanced across the constructed series. During all weight series, the contact surfaces were sandpaper and during all surface series, the object's weight was 330 g. The mixed series had 28 lifts including 11 lifts with an unexpected change in the object's weight (to 165 or 330 g; n = 4), contact surface (to sandpaper or silk; n = 3), or both (n = 4).
The final type of series was the friction estimation series. It included up to 34 trials where the participant held the 330 g object in the air and slowly spaced the digits until slip occurred at one of the digits. The friction estimation series did not include EEG recordings.
The data available on figshare (Data Citation 1) includes 10 experimental series from each participant: 6 weight series, 2 friction series and 2 mixed series.
The series schedule is included in Supplementary File 1. A complete account of the series including current and previous weights and surfaces for every trial is provided for each participant in the P.AllLifts structure (described below in the section Data Records).
Data processing
Raw data from SC/ZOOM and BCI2000 was imported to MATLAB to prepare the data records. The maximum of the cross-correlation, applied to the two sync signals (using the function xcorr with a maximum possible time lag of 5,000 samples) indicated the time lag shift, to sync the signals. We also removed unneeded or extra samples at the beginning and end of each series. The only pre-processing done to the data was to remove the mean from the EMG signals. No artefact rejection (blinking, eye movements, etc) was applied to the EEG signals.
Three types of data structures were prepared. The per-series kinematic, kinetic and neurophysiological data were stored in two complementary types of structured data files. The first structure-holistic-simply includes the raw data for each lifting series. The second type of structure-windowed-organizes the series into temporally segmented windows around each individual lift. Each window starts exactly two seconds before the LED that cued the participant turned on, and ends three seconds after this LED turned off.
Derived signals based on grip and load forces were calculated and added to the windowed structures. The total grip force was calculated as (Fz1+Fz2)/2, while the load force was calculated as Fx1+Fx2 (Figure 1a). Per-digit and total forces were calculated, along with the grip force: load force ratios. The ratios were only calculated when the absolute load force was >0.1 N.
A third data structure contains high-level information about each lift, such as the object's surface and weight properties, these properties for the previous lift in the series, a set of extracted event times, and a set of measures that characterize the behaviour.
Event extraction
Events structure the lift sequence, as seen in Figure 2. These events include the LED turning on and off, the index finger and thumb first making contact with the object, the onset of the load phase, lift-off, object placement on its support, object release, and the hand returning to the blue surface. The time of these (and more) events, and other related information (such as the duration of the various phases, between certain events), was extracted for every lifting trial and included as part of this dataset. 43 pieces of per-trial information are stored. The methods of computation of most of these components are easily inferred from the short descriptions provided (the script file WEEG_FindEvents.m, included in Supplementary File 2, provides all details). For all events, extensive inspections of both recorded timeseries and histograms of the identified events confirmed that the algorithms worked as intended.
To identify many events a combination of 1st and 2nd time derivatives of the pertinent signals were employed. Before computing these derivatives, all signals were subjected to Savitzky-Golay filtering. For instance, to obtain the derivatives of the signal X, the following MATLAB code was used: with SGF_WIN = 31 and dt = 1/sampling rate. To find the time of onset of the hand movement, the tangential velocity was calculated: HandVel ¼ sqrt dX:^2 þ dY:^2 þ dZ:^2 ð Þ ; The moment when HandVel reached 1 cm/s was then defined as the onset of movement.
To identify the moment of touch, we used the moment when the normal force had increased above 4 times the standard deviation of the normal force during hand movements.
A more complex algorithm was required to identify the onset of the load phase. Often participants moved along an upward convex trajectory towards the object and therefore tended to apply a downward tangential force when they initially grasped the object, i.e., they generated an initial 'negative load force' (e.g., Figure 2b). To resolve this, the moment of the zero crossing of the 2nd derivative of the load force immediately before the LF had reached 0.2 N was found and this could be used whether the initial LF was positive or negative.
Data Records
Supplementary File 1 contains all the series schedules, listing for each participant the sequence of actual experimental series and in what order they were stored in the data structures. The PDF also includes demographic information and notes about each participant.
The HS_PX_SY.mat file (where X is participant number and Y is series number), contains a structure with all data in a single lifting series. The WS_PX_SY.mat files contains a structure with the data organized in windows around every single lift, to allow easy extraction of single trials. The PX_AllLifts. mat file contains a structure P with information about every lift performed by each participant X, such as the times at which specific events occurred.
For each of the 12 participants, a single P structure is provided, and one HS structure and one WS structure are provided for each series. However, for a single weight series per participant, the non-EEG information was excluded, and is kept secret for a later competition. The total size of all MATLAB data structures, for all participants, stands at~15 GB.
HS_P1_S1.mat-HS_P12_S9.mat (108 files)
Each file contains all data in a single lifting series, in continuous format. For example, HS_P3_S2.mat contains the data for the Series 2 of Participant #3. Basic ID information is in the top level of the structure. hs.name gives the participant initials, and hs.participantnum and hs.seriesnumber give the participants data record number and the number of the series. Each of hs.emg, hs.eeg, hs.kin, hs.env, and hs.misc are substructures with the following fields: .names, .samplingrate, and .sig. Each .sig is a matrix of dimension #samples x #channels and contains the actual data. An identifier for each column of these matrices is found in names. The five matrices of each holistic structure are eeg, containing 32 EEG signals, emg (5 EMG signals), kin (24 position sensor signals and 12 force plate signals), env (the surface and weight signals), and misc (the remaining recorded signals-the surface LED signal, the participant LED signal, the button pressed by the researcher, the magnet signal, and two temperature signals).
HS_P1_ST.mat-HS_P12_ST.mat (12 files) Each of these files contains the eeg matrix, but not emg, kin, env, or misc. The matrix P.AllLifts contains one row for each recorded lifting trial and 43 columns that each represents a variable pertaining to single trials. The names of the columns in P.AllLifts can be found in P.ColNames. Table 1 describes the contents. Note that all times (except StartTime) are relative to the window start.
Data repository
Data are available at figshare (Data Citation 1).
Technical Validation EEG data
During data acquisition, unexpected artefacts in the EEG signals (e.g., 50 Hz electrical noise) and the impedance of each EEG electrode were continually monitored. One experimental series was aborted when noise was evident because of technical problems, and that series was restarted once the problem had been fixed.
That recorded EEG was confirmed to change with behavioural conditions as illustrated for Participant 3 in Figure 3a made contact (tBothDigitTouch, Table 1). With the help of EEGLab 11 , trials with the same weight as in the previous trial were contrasted with those with an unexpectedly higher weight. This participant showed a median time of 166 ms (interquartile range of 90 ms) from object contact to object liftoff, i.e., tLiftOff-tBothDigitTouch, when the object's weight was 165 or 330 g, i.e., the earliest moment the unexpected weight could have been detected by the participant was after~200 ms. Indeed, the EEG changed after this as exemplified for the Pz and C4 channels in power of the alpha (8-13 Hz) and beta For channels C4 and Pz (shown in insets) recorded in Participant 3, trials when the object's weight was the same as in the previous trial (Expected weight, n = 105; blue lines) and unexpectedly heavy (n = 30; red lines) were contrasted using EEGLab 11 . The panels show from top, the power in the alpha and beta bands after sorting the trials by phase at the peak frequency, the average EEG amplitudes, the ERSPs and the ITCs. The colored patches represent 95% confidence intervals. The earliest moment this participant on average could have detected an increased object weight was~200 ms after object contact (i.e., time zero). (c) All participants adapted their grip force to the object's weight, i.e., 165, 330 or 660 g in series with sandpaper surfaces. The different weights thus invoked markedly different fingertip forces in all participants. (d) The grip:load force ratio was the same or declined across the three object weights in all participants, i.e., the force coordination was roughly the same irrespective of the object's weight. (e) In series with the same object weight (330 g) but with contact plates covered with sandpaper, suede or silk, the grip:load force ratio increased with decreasing friction, i.e., in all participants the three contact plates offered different object-fingertip friction and all participants adapted to the prevailing friction. (f,g) When the weight (f) or the contact surfaces (g) was unexpectedly changed between trials, there was a marked change in the load force duration, in the peak grip force and the hold grip force (e.g., all increased when the object had an unexpected increased weight or decreased friction). Data aggregated across all participants. The lines represent the median and the 1st and 3rd quartile, black lines increased weight (f) and increased slipperiness (g) and gray lines decreased weight and slipperiness, respectively, as indicated on the top and bottom axes. (15-25 Hz) bands, the ERSP (event-related spectral perturbation) and the ITC (inter-trial coherence or event-related phase-locking).
EMG data
The quality of each EMG signal, i.e., the amplitude when the corresponding target muscle was activated, was assessed before the lifting trials commenced and continually during the experiments by means of online monitors. When signals deteriorated during an experimental run, notes about this were made (detailed in Supplementary File 1).
Kinematic and kinetic data
The setup was designed to minimize any interference with the 3D position recording system, that is, wood or plastic materials were used whenever possible in the object and the table. The measurement rms errors within the work space were confirmed to be ≤0.1 mm and ≤0.2°for the position and angular readings, respectively. Prior to the experiments, all sensors in the test objects were carefully calibrated.
Behavioural validation
Figure 3c-e demonstrates that all participants adjusted the force coordination to the prevailing weight and friction. Moreover, and importantly, an unpredictable change of the object's weight or surface material, resulted in marked changes but these effects were largely eliminated already in the subsequent lift, i.e., most of the adaptation took place in single trials (Figure 3f,g). The participants' behaviour thus replicates the major findings in previous studies [4][5][6] and the data show that significant behavioural effects were indeed evident in the recorded trials as a consequence of the object's properties.
Usage Notes
All data files (archived, per-participant, in zip format) are available from figshare (Data Citation 1). Several potentially useful MATLAB scripts are archived in Supplementary File 2-Utilities.zip. In this archive file, Usage.txt provides short instructions about using the code. These scripts are additionally made available through GitHub, at https://github.com/luciw/way-eeg-gal-utilities. We provide: WEEG_GetEventsInHS.m returns the times of various events within the series, instead of within the windowed trials. One can select the participant and a particular series type-Weight, Friction, Mixed, or All.
WEEG_PlotLifts.m enables a per-window visualization of a participant's activities. One can select the participants and series to plot. Each subplot shows a different trial and displays three signals: the grip force, the load forces and the hand velocity. Seven events are indicated by dotted vertical lines. The events shown are the time of: LEDon, when the hand starts moving, first contact, liftoff, LEDoff, the object is placed down, and object release. Above each subplot, the weight and surface type are indicated.
WEEG_PlotStats.m displays histograms indicating, per-participant, the time of index finger contact relative to thumb contact, the duration of the preload phase, and the duration of the load phase. The load phase duration is broken into 9 subplots, shown in the 3 × 3 grid. The current weight is shown on the y-axis, while the weight in the previous trials is shown on the x-axis.
WEEG_FindEvents.m is the script used to determine event timings and lift characterizations, and was used to generate the P.AllLifts structure.
The MATLAB data files and scripts described above can be loaded and run with Octave 3.8 or higher (http://www.gnu.org/software/octave/).
The open-source MATLAB software EEGLab 11 can be used to assist in processing the EEG signals. We provide two scripts for importing the data to EEGLab. WEEG_MakeEEGLABDataset.m and WEEG_MakeAllEEGLABDatasets.m convert the EEG and event data for all series into EEGLAB 'sets', for one participant and for all participants, respectively. The file chanlabels_32channel.xyz (in Supplementary File 2) is used to localize the electrode positions in EEGLab, for topographic plots. WEEG_HowToGenerateERSP.txt describes how to use EEGLab to detect event-related spectral perturbation (ERSP) in the WAY-EEG-GAL data.
Characterization of events
When exploring the WAY-EEG-GAL dataset it may be useful to consider that some events are primarily preceded by and others are followed by central nervous system activity. For instance, the reaching phase is reasonably preceded by brain activity that may be reflected in the EEG prior to the initiation of the hand movement, while touching the object gives rise to sensory inputs that may be reflected in the EEG after the event. | 6,420.4 | 2014-11-25T00:00:00.000 | [
"Computer Science"
] |
Analysis of long‐term survival in multiple myeloma after first‐line autologous stem cell transplantation: impact of clinical risk factors and sustained response
Abstract The widespread use of high‐dose therapy and autologous stem cell transplantation (ASCT) as well as the introduction of novel agents have significantly improved outcomes in multiple myeloma (MM) enabling long‐term survival. We here analyze factors influencing survival in 865 newly diagnosed MM patients who underwent first‐line ASCT at our center between 1993 and 2014. Relative survival and conditional survival were assessed to further characterize long‐term survivors. Achievement of complete response (CR) post‐ASCT was associated with prolonged progression‐free survival (PFS) in the whole cohort and with significantly superior overall survival (OS) in the subgroup of patients receiving novel agent‐based induction therapy. Landmark analyses performed at 1, 3, and 5 years post‐ASCT revealed that sustainment of any response had a highly significant influence on survival with no significant differences between sustained CR and sustained inferior responses. Furthermore, outcome was independently improved by administration of maintenance therapy. A subset of patients did experience long‐term survival >15 years. However, conditional survival demonstrated a persistent risk of myeloma‐associated death and cumulative relative survival curves did not show development of a clear plateau, even in prognostically advantageous groups. In conclusion, in this large retrospective study, sustained response after first‐line ASCT was found to be a major prognostic factor for OS independent of depth of sustained response. Administration of maintenance therapy further improved outcome, supporting the hypothesis that interventions to prolong responses achieved post‐ASCT may be essential to reach long‐term survival, especially in the setting of persisting residual disease.
Introduction
The outcomes of patients with multiple myeloma (MM) have greatly improved over recent decades following both the widespread use of high-dose therapy and autologous stem cell transplantation (ASCT) and, thereafter, the introduction of novel agents. [1][2][3] Although still considered a largely incurable disease, younger MM patients with lowrisk International Staging System (ISS) scores and no adverse cytogenetic features can now expect to live for 10 years 4,5 raising the question whether cure might be possible in a subset of patients. 6 Achievement of complete response (CR) post-ASCT has been repeatedly shown to be associated with superior prognosis. 7,8 However, CR patients seem to represent a heterogeneous group with those having persistent minimal residual disease (MRD) at higher risk of early relapse. 9,10 Patients who progress early after achieving CR do particularly badly, highlighting
ORIGINAL RESEARCH
Analysis of long-term survival in multiple myeloma after first-line autologous stem cell transplantation: impact of clinical risk factors and sustained response the importance of efforts to prolong response duration. 6,[11][12][13] Whether interventions to deepen or prolong the duration of response, such as maintenance therapy, contribute to improved overall survival (OS) outside the setting of clinical trials remains an open question. Therefore, more detailed information on clinical characteristics of long-term survivors as well as the effect of the depth and duration of response is required.
We here provide real-world data on the outcomes of MM patients treated with upfront ASCT at our center over 22 years as well as a comprehensive analysis of prognostic factors associated with long-term survival.
Patients and Methods
Patients with newly diagnosed MM treated at the University Hospital of Heidelberg, Germany, with high-dose melphalan supported by single or tandem ASCT as part of their first-line therapy between March 1993 and July 2014 were retrospectively analyzed. Patients who underwent their first ASCT as part of a later line of therapy were not considered for this analysis. Melphalan was administered at a dosage of 200 mg/m² body surface area which was reduced to 100 mg/m² in case of severe renal insufficiency (creatinine clearance < 40 mL/min). Novel agentbased induction comprised regimens including either thalidomide, lenalidomide or bortezomib. Response assessment was performed at day 100 after ASCT using EBMT criteria. 14 Additional response assessment according to the IMWG criteria 15 adapted to include the response category minimal response (MR), was available for the subset of patients who started treatment after 2007. Given the significantly smaller number of IMWG evaluable patients, results according to EBMT response criteria are presented, if not otherwise indicated. A subset of patients received maintenance therapy after ASCT, mostly with interferon or thalidomide, according to the treating physician's discretion.
Progression-free survival (PFS) and OS were calculated from the day of first ASCT using the Kaplan-Meier method. Patients proceeding to allogeneic transplantation were censored at that time. Prognostic impact of clinical and therapeutic factors on PFS and OS was evaluated on the basis of hazard ratios (HR) with 95% confidence intervals (CI) from multivariate Cox's proportional hazards regression. Maintenance therapy was considered a timedependent event potentially following ASCT. Year of the first ASCT was centered at the median. In case of missing variables, "available case analysis" was performed, that is, a case was deleted when missing a variable required for a particular analysis but included for analyses in which all required variables were present. No missing value imputation was performed.
Landmark analyses at 1, 3, and 5 years after first-line ASCT were performed to evaluate the impact of possible influence factors, in particular of sustained response, on OS of patients alive at these time points by multivariate Cox's proportional hazards regression models. Patients were differentiated into those with sustained CR (combined with near CR (nCR) in the subgroup analysis of IMWG evaluable patients) at the respective time points, sustained inferior responses, that is, very good partial response (VGPR), partial response (PR), MR or stable disease (SD), (sustained non-CR), those having lost a prior CR or lost a prior inferior response. A sustained response was defined as a response achieved at day 100 after ASCT and absence of relapse/death until the respective landmark. Patients deceased or censored prior to one of these time points were excluded from the respective model. Furthermore, in order to assess the evolution of prognosis over time, conditional survival CS(t|s) which expresses the conditional probability of surviving a further t years, given that the patient has already survived s years, was calculated as the ratio of two Kaplan-Meier estimates Ŝ with � CS(t|s) =̂s (s+t) s(s) . 16 95% CIs for CS(t|s) were calculated using a variation in the standard Greenwood formula for the estimation of CIs in unconditional survival. 17 In order to normalize the observed survival of MM patients S o (t) to the expected survival of the general population S p (t) adjusting for age, sex, and calendar year, the cumulative relative survival function r(t) was calculated as r(t) = S o (t) S p (t) using the R package "periodR" (version 2.0-9) including survival probabilities extracted from period life tables published by the German Federal Statistical Office. 18 All statistical tests were two-sided and P-values <0.05 were considered statistically significant. Calculations were done using the statistical software environment R (version 3.3.2, www.r-project.org) together with the R packages "periodR" (version 2.0-9), and "survival" (version 2.40-1). This retrospective study was approved by the University of Heidelberg's Ethics Committee (S-337/2009).
Results
Patients' characteristics 865 patients with newly diagnosed MM who proceeded to upfront ASCT were included in this analysis. Median age at diagnosis was 56.6 years (range 24-74 years), 509 were male. Novel agent-based induction therapy was administered to 358 patients, 258 patients underwent tandem ASCT. Following ASCT, 386 patients received maintenance therapy, mainly with interferon α or thalidomide. A total of 78 patients proceeded to allogeneic transplantation. Median follow-up was 7.1 years (range 0.1-21.8 years). Furthermore details on patients' characteristics are shown in Table 1, details on induction and maintenance regimens are given in suppl. Table S1.
Median PFS for the entire patient cohort was 2.0 years, median OS was 6.7 years. Assessed by EBMT response criteria, a CR at day 100 post-ASCT was achieved by 76 patients (9.4%) who experienced a median PFS of 2.2 years and a median OS of 7.4 years. In comparison, median PFS in patients with PR was 2.2 years, and 1.6 years in patients with MR; median OS was 6.8 years in PR, 5.7 years in MR, and 0.8 years in PD patients (suppl. Fig. S1). A CR prior to ASCT was achieved by 4.2% of patients. In patients with available response assessment according to IMWG criteria, 15 (3.7%) achieved a CR, 167 (41.0%) a CR or nCR; in the subgroup of patients with novel agent-based induction, CR post-ASCT was achieved by 4.5%, CR or nCR by 48.3%. No significant differences in outcome between CR and nCR patients were observed (PFS: P = 0.90; OS: P = 0.64).
Multivariate risk factor analysis
Multivariate analysis showed that novel agent-based induction (HR 0.58, P < 0.001), administration of maintenance therapy (HR 0.53, P < 0.001) and achievement of CR post-ASCT (HR 0.69, P = 0.01) were significantly associated with prolonged PFS. Older age (HR 1.15, P = 0.01) and thrombocytopenia <150.000/μL (HR 1.48, P = 0.02) at diagnosis were significant risk factors, a negative trend was seen for ISS stage 3 (HR 1.30, P = 0.07). Regarding OS, novel agent-based induction (HR 0.48, P < 0.001) and maintenance therapy (HR 0.48, P < 0.001) were significantly associated with superior survival, whereas age (HR 1.35, P < 0.001) and thrombocytopenia (HR 1.67, Laboratory values assessed at time of diagnosis. For serum β2microglobulin, serum albumin and time from diagnosis to first ASCT median as well as first and third quartiles are given. Table 2). Achievement of CR prior to ASCT did not exert a significant impact on PFS (HR 0.75, P = 0.16) or OS (HR 0.84, P = 0.49) in multivariate analysis. Similarly, having received tandem transplant was not significantly associated with prolonged PFS (HR 0.93, P = 0.46) or OS (HR 0.80, P = 0.10). Subgroup analysis of different modalities of maintenance therapy showed that maintenance therapy with interferon α had a pronounced positive impact on PFS (HR 0.47, P < 0.001) and OS (HR 0.42, P < 0.001), while novel agent based maintenance, largely consisting of thalidomide, failed to reach statistical significance (PFS: P = 0.08; OS: HR 0.80, P = 0.34), see Figure S2. A further subgroup analysis of patients receiving bortezomib or lenalidomide maintenance was not possible due to the small sample size of patients receiving these therapies.
In the subgroup of patients treated with novel agentbased induction, achievement of CR/nCR post-ASCT (IMWG response assessment) was associated with significantly superior PFS (HR 0.44, P < 0.001) and OS (HR 0.44, P = 0.005) possibly reflecting a qualitatively superior response following the use of novel agents. However, in contrast to the overall patient population, administration of maintenance therapy, in this cohort largely thalidomide, did not appear to confer superior PFS and OS in this subgroup analysis (suppl. Table S2).
Landmark analysis
Landmark analyses were performed at 1, 3, and 5 years post-ASCT to evaluate the impact of prognostic variables on OS of patients still alive at these time points and, in particular, to assess the effect of response duration. Sustained CR exerted a highly significant positive impact on survival starting at 1, 3, and 5 years after ASCT (HR 0.29, P < 0.001, HR 0.33, P < 0.001, and HR 0.41, P = 0.007, resp.) as did sustained non-CR (HR 0.35, P < 0.001, HR 0.32, P < 0.001 and HR 0.21, P < 0.001, resp.) (Fig. 1, suppl. Fig. S3). No significant differences were seen between the outcomes of patients with sustained CR compared to those with sustained inferior responses (P = 0.37, P = 0.98, and P = 0.10 for 1, 3, and 5 year landmark analyses, resp.). An independent beneficial effect could be shown at all three landmarks for the administration of maintenance therapy (HR 0.47, P < 0.001, HR 0.47, P < 0.001 and HR 0.56, P = 0.004, resp.) and for novel agent-based induction (HR 0.44, P < 0.001, HR 0.61, P = 0.03 and HR 0.50, P = 0.03, resp.) (Fig. 2, suppl. Table S3). Assessing the impact of different regimens of maintenance therapy, maintenance therapy with interferon α continued to show a pronounced benefit on survival compared to no maintenance (HR 0.42, P < 0.001, HR 0.40, P < 0.001 and HR 0.48, P = 0.001, resp.), while no significant impact could be found for maintenance therapy with novel agents, in our cohort largely thalidomide (HR 0.76, P = 0.29, HR 0.94, P = 0.83 and HR 1.28, P = 0.53, resp.). When analysis was restricted to the subset of patients with novel agent-based induction (IMWG response assessment), sustained non-CR/nCR seemed to be inferior to sustained CR/nCR at a 1-year landmark analysis (HR 2.26, P = 0.01), although caution is advisable due to the small number of events in these subgroup analyses. Landmark analyses thus revealed that a sustained response of any kind appeared to confer a major beneficial effect on survival which was further independently improved by the administration of maintenance therapy.
Conditional survival
We then assessed whether it was possible to determine a minimal survival time which predicted subsequent longterm survival. We therefore calculated the conditional survival CS(t|s) as the probability of surviving a further t years after having already survived s years following ASCT. On analysis of the entire cohort at the time of ASCT (s = 0), 3-year conditional survival CS(3|s = 0) was 74% [95% CI 71%; 77%] and 5-year conditional survival CS(5|s = 0) was 59% [56%; 63%]. While there seemed to be a slight trend towards improved conditional survival over time, no specific minimal survival time of prognostic value for long-term survival could be defined (Fig. 3, suppl. Fig. S4A). Regarding conditional survival of specific response groups, no apparent differences could be found between patients with CR or PR after ASCT with CS(3|s = 0) being 82% [73%; 91%] for CR and 77% [74%; 81%] for PR patients. In contrast, patients with PD at day +100 post-ASCT had a much lower probability of surviving the following 3 years after ASCT compared to responding patients with a CS(3|s = 0) of only 25% [8%; 42%]. At 1 year post-ASCT, however, the conditional survival CS(3|s = 1) of the subgroup of patients with PD at day +100 post-ASCT but still alive 1 year post-ASCT increased to 58% [27%; 90%] and was thus similar to conditional survival of patients with initial PR (CS(3|s = 1) = 72% [68%; 76%]) or CR (CS(3|s = 1) = 72% [61%; 83%]) at that time point (suppl. Fig. S4B).
In summary, assessment of conditional survival revealed that, in our patient cohort, the likelihood of ongoing survival remained relatively stable over time with no evidence of a significantly improved prognosis after a certain time point, once again highlighting the importance of response duration.
Relative survival
In addition, the relative survival of MM patients was calculated by normalizing against the expected mortality rate in the corresponding general age-and sex-matched population. Relative survival was assessed for the entire MM patient cohort as well as for the subgroups of patients with 3-year sustained response, novel agent-based induction therapy, and CR after ASCT (Fig. 4, suppl. Fig. S5). However, no clear plateau suggestive of cure was seen, neither in the overall patient population nor in any of the prognostic subgroups.
Discussion
Improvements in response rates and overall outcomes of MM patients over recent decades have prompted interest in the more detailed characterization of long-term survivors and have raised the question of potential cure. 6,8,19 Modern diagnostic techniques allow for further differentiation of patients with CR into those achieving stringent CR or even MRD negativity, both associated with excellent outcomes. 20,21 In our cohort, achievement of CR post-ASCT was associated with prolonged PFS but failed to reach statistical significance with regard to OS. Similar observations were made by several groups in the era prior novel agents, [22][23][24] possibly indicating that the response obtained by conventional chemotherapeutic agents, though fulfilling the criteria for CR, was of insufficient depth to affect OS. This hypothesis is lent further support by our subgroup analysis of patients who received novel agent-based induction which found that achievement of CR/nCR post-ASCT did, in fact, confer significant improvements in both PFS and OS. Consistently, a recent meta-analysis of 3 first-line MM trials did not find a superior survival of patients with CR compared to PR in the setting of persistent MRD. 25 Residual disease and sustained response Along these lines, detection of persistent MRD has been identified as a risk factor for early relapse from CR. 9,10 Loss of CR is associated with adverse outcome, especially if occurring within the first 12-24 months post-ASCT. 9,13 Patients with high-risk cytogenetic features are more likely to relapse early despite promising response rates. 9,26 In fact, a rapid initial response and rapid subsequent relapse have been observed as features of aggressive disease characterized by a high proliferative index. 27,28 In contrast, it has been suggested that some MM patients with a presumably MGUS-like biology experience excellent survival despite failing to achieve CR. 29,30 It therefore appears that sustainment of response might be at least as important as depth of response. 11,12 In patients treated with the highly aggressive total therapy regimens, sustained CR was associated with excellent survival, whereas patients relapsing from CR experienced worse outcomes than those never achieving CR. 6,11,12 In our cohort of patients treated with first-line ASCT but who received heterogeneous induction and maintenance regimens, sustained response was likewise revealed as a major prognostic factor. The prognostic impact of sustained response remained highly statistically significant in 1, 3, and 5 year landmark analyses suggesting a continued effect. It is worth noting that patients with sustained partial responses also experienced excellent outcomes and in the overall cohort, no significant differences in survival were discernible between patients with sustained CR and sustained inferior responses. In the subgroup of patients with novel agent-based induction, however, there seemed to be a superior outcome in patients with sustained CR/nCR compared to sustained non-CR/nCR. This differential effect seen in novel agent treated patients compared to our overall patient cohort might reflect the greater depth of tumor eradication achieved by novel agent-induced CR compared to CR following conventional chemotherapy. 31
Impact of maintenance therapy
In our analysis, administration of maintenance therapy was found to be of major prognostic significance in multivariate cox analysis, multistate models and landmark analyses. Maintenance therapy has been linked to prolongation of PFS with some studies showing an additional OS benefit. 32,33 The importance of PFS prolongation could be further highlighted in a multistate model of our patient cohort showing time to relapse to be positively associated with post-relapse survival (data not shown). It is worth noting that our landmark analyses found sustained response and maintenance therapy to be of independent prognostic significance.
When maintenance therapy with interferon α and novel agent-based maintenance (largely thalidomide) were assessed separately, the latter failed to show a significant impact on survival in our patient cohort while maintenance with interferon α continued to be a highly significant influence factor on PFS and OS. In clinical trials, thalidomide maintenance did not consistently improve OS 34,35 and, indeed, might even be harmful in the setting of high-risk disease. 36 Regarding maintenance therapy with interferon α, two large meta-analyses of randomized trials showed a significant benefit in terms of time to progression and OS for patients in the interferon α trial arms. 37,38 However, given its toxicity profile as well as the availability of modern agents, its use in MM maintenance therapy has been largely abandoned. 39 While this retrospective analysis is not powered to evaluate different maintenance regimens, the overall impact of maintenance therapy is certainly impressive and highlights the potential of this treatment modality in improving MM survival.
Potential cure of MM
Whether MM might ultimately be cured in a significant number of patients remains a matter of debate. Some authors have reported achievement of a plateau in survival curves suggestive of cure in a subset of patients, especially following intense treatment protocols. 6,8 While some patients in our cohort remained in remission for >15 years following ASCT, they were too few to allow for conservative determination of a clear plateau. It is interesting to note, however, that these long-term survivors included both patients in CR and PR. Assessment of conditional survival showed a trend toward improving prognosis over time, however, no minimal survival time of prognostic value for long-term survival could be identified indicating a persisting risk of MM associated death even more than 15 years following ASCT.
Several factors might account for the lack of a demonstrably cured cohort. As our study population spans more than two decades, only 41.7% of patients received novel agent-based induction therapy, here identified to be a major prognostic factor. Furthermore, as widespread use of novel agents was implemented at our institution starting in 2008, follow-up time of patients treated with novel agents might be, as yet, inadequate to allow for a clear identification of such a cured cohort. This is one of the largest analyses of outcomes and prognostic factors in transplant-eligible MM patients not included in clinical trials. Given the "real world" origin of this data, this analysis is subject to a number of limitations. Our patient cohort is more heterogeneous with respect to both clinical characteristics and treatment approaches than would be found in a clinical trial setting. Certain treatment options, such as a tandem transplant or maintenance therapy, were not administered in a randomized manner but were dependent on the treating physician's discretion. This reflects the evolution of therapeutic strategies and changing availability of novel agents over time. In addition, the CR rate observed in our patient cohort might be underestimated as some patients possibly opted against a bone marrow aspirate required to confirm CR in an out-of-trial setting. To compensate for this, we addressed patients with CR and nCR together in the IMWG evaluable cohort. Furthermore, the datasets of certain variables are incomplete. In particular, cytogenetic data was not available in enough patients to be included in this analysis. On the other hand, our data has the important advantage of being more representative of the general MM patient population as the eligibility criteria employed in clinical trials tend to result in a younger fitter cohort than would be observed in routine clinical approach.
In conclusion, in this large retrospective study, we found sustained response after first-line ASCT to be a strong prognostic factor for OS, not only for those in CR but also for patients with lesser responses. Administration of maintenance therapy further improved outcomes, supporting the hypothesis that interventions prolonging responses achieved post-ASCT are essential to reach long-term survival. This needs to be further investigated in current MRD-driven approaches to determine the roles of duration or depth of response, respectively, in contributing to the long-term achievement of functional cure of MM patients.
Supporting Information
Additional supporting information may be found in the online version of this article: Figure S1. Progression-free survival (A) and overall survival (B) stratified by response achieved after ASCT. EBMT response criteria are applied with CR, complete response; PR, partial response, MR, minimal response, and PD, progressive disease. Due to the very small number of patients with stable disease, data not shown. Figure S2. Simon-Makuch plots of progression-free survival (A) and overall survival (B) stratified by type of maintenance therapy. Simon-Makuch plots show PFS and OS according to no maintenance therapy, maintenance therapy with interferon α or with novel agents (i.e., thalidomide, bortezomib or lenalidomide). Maintenance therapy is assessed as a time-dependent variable thus accounting for an individual's possible change from "no maintenance" to "maintenance" over time. Figure S3. Landmark analysis at 1-year after ASCT. Patients are stratified by sustained complete response (sustained CR), sustained inferior response (sustained non-CR), loss of complete response (lost CR) and loss of inferior response (lost non-CR). Figure S4. 5-year (A) conditional survival for the entire patient cohort as well as 3-year conditional survival stratified by response achieved after ASCT (B). EBMT response criteria are applied with CR, complete response; PR, partial response; MR, minimal response; SD, stable disease, and PD, progressive disease. Figure S5. Relative survival stratified by type of induction therapy (A) and response achieved after ASCT (B). Table S1. Details on induction regimens. Details on the most commonly applied induction regimens in our cohort are given. Table S2. Multivariate analysis of possible influence factors on PFS and OS -subgroup analysis of patients with novel agent-based induction therapy. Table S3. Landmark analyses. Multivariate analysis of possible impact factors on OS is given at 1, 3, and 5 years after ASCT landmarks. | 5,688.2 | 2017-12-28T00:00:00.000 | [
"Medicine",
"Biology"
] |
Responding to aid volatility: government spending on district health care in Zambia 2006–2017
ABSTRACT Background: A corruption event in 2009 led to changes in how donors supported the Zambian health system. Donor funding was withdrawn from the district basket mechanism, originally designed to pool donor and government financing for primary care. The withdrawal of these funds from the pooled financing mechanism raised questions from Government and donors regarding the impact on primary care financing during this period of aid volatility. Objectives: To examine the budgets and actual expenditure allocated from central Government to the district level, for health, in Zambia from 2006 to 2017 and determine trends in funding for primary care. Methods: Financial data were extracted from Government documents and adjusted for inflation. Budget and expenditure for the district level over the period 2006 to 2017 were disaggregated by programmatic area for analysis. Results: Despite the withdrawal of donor funding from the district basket after 2009, funding for primary care allocated to the district level more than doubled from 2006 to 2017. However, human resources accounted for this increase. The operational grant, on the other hand, declined. Conclusion: The increase in the budget allocated to primary care could be an example of ‘reverse fungibility’, whereby Government accounted for the gap left by donors. However, the decline in the operational grant demonstrates that this period of aid volatility continued to have an impact on how primary care was planned and financed, with less flexible budget lines most affected during this period. Going forward, Government and donors must consider how funding is allocated to ensure that primary care is resilient to aid volatility; and that the principles of aid effectiveness are prioritised to continue to provide primary health care and progress towards achieving health for all.
Background
Renewed efforts towards achieving universal health coverage by 2030 are only possible with continued attention and prioritisation of primary health care. The 1978 declaration of Alma-Ata highlights this, stating that primary health care is the key to achieving 'health for all' [1, p. 2]. Primary care is defined as the provision of first contact, person focussed care, that is able to deal with most health needs [2, p. 458]. Despite the recognised importance of primary care, many low and middle-income countries have failed to provide a quality primary care package of essential services to its citizens [3]. Primary care expenditure has not reflected the status given to it in the global community, with funding described as 'insufficient and inconsistent' [4, p. 322].
Primary care has been noted to benefit particularly from the aid effectiveness agenda, with the OECD stating that increased aid coordination is correlated with the increased coverage and use of primary care. Declarations and commitments, including the Paris Declaration of 2005 and the International Health Partnership Plus (IHP+), have enshrined the importance of aid effectiveness, with particular emphasis on the principles of government ownership, alignment and harmonisation [5]. Attempts to implement these principles have included Sector Wide Approaches (SWAps) and specific finance mechanisms, including general budget support and sector budget support, and, at least on paper, have been enthusiastically adopted by both donor and recipient governments [6,7].
In Zambia, donors, known as 'Cooperating Partners' (CPs), up until 2009, channelled funds into a basket mechanism. The term 'basket funding' in Zambia refers to the co-financing of district health services by a number of donors and government using a single set of procedures [8]. Channelling domestic and international funds directly to districts for primary care, through the basket, ensured that CPs' support was aligned with the Government's priority of providing 'equity of access to cost-effective, quality healthcare services as close to the family as possible,' or primary care. The adoption of on-budget support enabled government to exercise strong ownership of the aid in the health sector and provides an example of how the principles of the aid effectiveness agenda can be put in to practice [9].
However, challenges in implementation have been notable. Zambia provides an example whereby donors have withdrawn direct financial aid to Government in response to government corruption. Following a corruption event in 2009, involving Ministry of Health officials, CPs froze funding to the basket mechanism [10]. As a result, the basket mechanism was discontinued because donors were no longer willing to continue to channel funds through the Ministry of Health. Whilst the exact amount of international funding lost is unknown due to a move towards off-budget support, the reduction of development assistance for health from Sweden alone, from USD 8.1 million in 2009 to USD 680,000 in 2010, highlights the changes in the sector as a result of the event [11].
In examples like Zambia, criticism has been levelled at international donors for unduly impacting essential services, including primary care, by freezing funds, reverting to practices that do not reflect the aid effectiveness principles, forcing Government to change its approach to financing [10]. There has been significant discussion over whether the withdrawal of donor funding did impact health financing in the long term, and whether the reneging on aid coordination affected primary care financing. Sufficient time has passed since the 2009 corruption event in Zambia to start to understand how primary care financing changed over the period, in the context of the cessation of donor funding through the district basket mechanism.
By examining the financial allocations for the district level from 2006 to 2017, it is possible to determine how primary care financing changed over the period in which the corruption event occurred. This study aims to examine the budgets and actual expenditure allocated from central Government to the district level, for health, in Zambia from 2006 to 2017, to determine trends in funding for primary care. This analysis will allow us to examine if there were changes in government budgetary allocations and expenditure at district level, in the context of the withdrawal of donor funds.
A better understanding of the interaction between development assistance for health and government expenditure on health will contribute to our understanding of the level of fungibility of resources in the health sector in countries where development assistance for health constitutes a significant share of health spending. Fungibility describes to what extent government health expenditure is replaced by development assistance for health and vice versa.
Methods
Primary care in Zambia encompasses all health services coordinated by 117 District Health Offices (DHOs), which include health services provided by health posts and centres, the community level, and district hospitals [12]. A 'top down' and 'bottom up' budgeting process occurs, whereby districts create costed annual work plans, and the central MoH provides a budget envelope for these plans [13].
The published budgets provide disaggregated data for each district. The district budgets are presented in a disaggregated form: personal emoluments or human resources (HR), health service delivery (HSD) and health systems management (HSM). The HR budget for each district is presented with the other district allocations (HSD and HSM), but is held and disbursed at the central level. The DHOs hold responsibility for managing and coordinating the rest of the budget: primary care HSD and HSM; known as the operational grant.
The budget for drugs is not presented by geography or level of care. Drugs are procured at the central level by the MoH, and pushed to districts, who do not have control over this funding. Instead, DHOs are allowed to use up to 4% of this operational grant to procure emergency drugs. We have decided not to include drugs in the analysis, because of the way the budget is presented.
Financial data were gathered from documents retrieved at the offices of the Zambian MoH and the Ministry of Finance (MoF). Budgetary allocations were taken from annual documents detailing estimates for each calendar year: 'Yellow Book: Estimates of Revenue and Expenditure' [14]; and actual expenditure, the resources spent in the financial year (January-December), were obtained from the annual documents entitled 'Blue Book: Detailed Financial Report on Actual Expenditure' [15].
According to the budget books provided by Government, HR includes wages, allowances and gratuities for individuals working at the primary care (district) level, and this is paid by the central MoH directly to health workers. The allocations are included in this analysis, because these resources are presented specifically for primary care, disaggregated by district. The operational grant (HSD and HSM) is disbursed from the central MoH to the DHOs. From here, the funds are either utilised by the DHO or disbursed to the facility level. Resources allocated to health service delivery are for first level referral, community health services, health centre clinical services, and health centre outreach. Resources allocated to Health System Management are for utilities, supervisory visits, administration, remuneration for contractual personnel and performance assessments. The sub-programmes included under HSD, HSM and HR, are presented each year in the yellow books.
The period of 2006 to 2017 has been chosen to allow for sufficient time to demonstrate the trends in primary care financing. Data collection took place between February and April 2016, in Lusaka, Zambia through manual data entry from books onto Microsoft Excel; and remotely in March 2019. Key informants and representatives from the MoH and the MoF assisted in identifying the necessary documents that contained the information required for this study.
Financial allocations were labeled according to their programme area: HR, HSD or HSM, the district, and year. HSD and HSM have been combined to provide the total operational grant that is disbursed to, and managed at, the district level. In 2013, the Zambian kwacha was rebased so that 1000 ZMK was the equivalent of 1 ZMW [16]. Therefore, figures prior to 2013 were converted into ZMW. The data have been adjusted for inflation using the consumer price index (CPI), with 2010 as the index year [17].
There were several changes to the way budgets were presented and allocated over the time period. The responsibility for primary care was moved from the MoH to the Ministry of Community Development, Mother and Child Health (MCDMCH) for 2013, but was then realigned back to the MoH in 2016 [18]. This changed where in the document the budgets were found, but not the budget lines. New districts have been created over the period of analysis: allocations for each district are included for the years in existence.
Microsoft Excel was used to analyse the data and identify trends over time for human resources and the operational grant. To establish the level of funding for primary care, the district allocations were summed to provide a national and regional picture. Financial data were also analysed per capita. Population figures and estimated growth rates were taken from the Population and Housing Census 2000 and 2010 and projected for each year between 2006 and 2017 [19].
The use of secondary data led to challenges regarding the quality of data. Civil servants in the MoH were hesitant as to whether actual expenditure data existed. Difficulties during the collection and location of data highlighted that detailed financial reports were not regularly compiled and used by the MoH for decision-making purposes, but were available and used by the MoF.
Ethical considerations
No ethical approval was required for the study because it uses publicly available secondary data. Authority to conduct the study was obtained from the Permanent Secretary of the Ministry of Health, who provided support to the primary researcher to access the data in the form of the provision of documents and contacts. Figure 1 demonstrates that the total budget and actual expenditure allocated to the district level increased from 2006 to 2017 by 177% and 165%, respectively. In 2008, 2010, and 2016 there was negative growth in the total budget for districts, declining by 4%, 12%, and 14% from the preceding years and in each case, recovering the following year. Actual expenditure decreased between 2009 and 2010; 2012 and 2013; and 2015 and 2016. The proportion of the total health budget allocated to districts has remained relatively consistent over the period of this analysis, at an average of 31%.
Results
Actual expenditure sharply declined in 2013 by 54%. Numerous attempts have been made to understand this by consulting a variety of government and former government stakeholders. This data is consistent with what is reported in The World Bank Health Sector Public Expenditure Review, which attributed the large decline in expenditure to administrative reforms [20]. In 2013, responsibility for district level health care was transitioned from the MoH to the Ministry of Community Development, Maternal and Child Health. to recover in 2014 and is back up to 97% by 2015. The data demonstrates a substantial increase in the allocation of funds to human resources over the period.
The operational grant, which includes health service delivery and health systems management, is managed by the district and represents the resources that can be managed and spent at the decentralised level. The steep reduction in the budget between 2009 and 2010, and in actual expenditure between 2008 and 2010 in the operational grant coincides with the corruption event that occurred during this period. Following the corruption event and the subsequent cessation of donor funds, the operational grant expenditure does not recover to prior levels.
The decline in the operational grant allocated and spent by districts was experienced across districts, and regardless of population level. Table 1
Discussion
Results from this study could be interpreted as an example of 'reverse fungibility', where withdrawal of development assistance for health was substituted with government spending. In the literature, fungibility has often been studied from the perspective of development assistance replacing government funding, and to what extent development assistance for health contributes to additional health spending. Limited attention, however, has been given to how governments mitigate disruption of development assistance for health. The budget and actual expenditure allocated to districts in 2017 was over double that allocated in 2006: but, the rise in resources for HR more than tripled over the same period, accounting for this increase. The effects of the withdrawal of donor funding following the 2009 corruption event have been greatly debated [21,22]. These results demonstrate that the event did not affect overall resources for districts, but illustrate the changes to primary care over the periodmade by both GRZ and donors.
Although overall resources for districts were unaffected, disaggregation of the data indicates that the withdrawal of funding did effect the operational grants provided for primary health care at the district level. Studies suggest that pooled funds, such as the district basket, are typically earmarked. In the case of Zambia, CPs could stipulate that their funds were used for the operational grant, and not HR. This explains why the withdrawal of funding from the district mechanism would have only affected the operational grant [23,24]. Where these conditions are stipulated, the operational grant, as opposed to HR, will be more sensitive to changes made to levels of funding [25].
Expenditure on human resources continued to increase despite the withdrawal of donor funds. On recognising the severe shortage of health workers in 2009, the Government prioritised HR, increasing the proportion of primary care funding allocated to HR over the period analysed [25]. Budget lines such as HR, infrastructure and drugs are difficult to default -for example, civil servants experience a high level of job security. Once resources are committed to HR, there is little freedom to renege on these commitments.
The decline in the operational grant indicates that districts' ability to make financial decisions have diminished over the period. This is particularly evident from 2012 onwards, where data show that districts were allocated an average budget of ZMW14 per capita (adjusted for inflation), dependent on the district. This is the equivalent of US$ 1.14 per capita, which, according to the 'yellow book', must fund the district management team, district hospital, health facilities and community health posts within the area [26]. In reality, the operational grant per capita spent ranged from an average of US$0.52 in the Western and Copperbelt regions to US$0.83 in the Northwestern regionfurther reducing the ability of districts within these regions to fulfil planned spending on primary care.
It is unclear whether the Government of Zambia purposefully sought to fill the gap left by donors. The swift recovery in financing for primary care in 2011 suggests that the Government were proactive in seeking to fill the gap, but that this did not extend to ensuring that allocations for the operational grant were maintained. This highlights that while Government may have been responding to a reduction in primary care financing in general, they may not have been tracking what the impact of the shift of donor funding from on to off budget was having on specific programme areas, such as the operational grant, and thus, on district's ability to make decisions close to the user.
Decentralisation reforms that were initially introduced in 1992, and subsequently strengthened through the National Decentralisation Policy in 2013, gave districts the autonomy to manage resources for primary care. This was intended to move decision-making closer to the end user, and, in doing so, enhance the quality of care [28]. While financial resources do not always result in increased decision-making authority, the Government's inability to fill the gap left in the allocation for the operational grant that the districts are responsible for could diminish the intention of their decentralisation reforms. A retrospective study of the 1992 decentralisation reforms comments on the level of resource allocation for DHOs resulting in 'moderate choices': the same could be argued of the nominal operational grant provided to districts over the period of this study [27, p. v].
The World Bank Zambia Health Sector Public Expenditure Review showed that cooperating partner support continued to grow even after the 2009 corruption event [18]. In recent years, the majority of the CP funding has been directed to HIV/AIDS and other sexually transmitted diseases: in 2015 to 2016 these two areas accounted for up to 70% of donor funds [28]. CPs that withdrew from the district basket mechanism have continued to fund initiatives supporting primary care throughout this period, such as the DFID Tackling Maternal and Child Undernutrition 2 programme and the USAID funded Systems for Better Health programme in Zambia. However, these have been off budget [29,30]. The shift among donors from a pooled financing mechanism to a clearer project approach may result in negative implications for aid effectiveness. Aid effectiveness rests on five collectively agreed principles of good practice: ownership, alignment, harmonization, mutual accountability and results-based management [31]. However, the principles of ownership, alignment and harmonization are more difficult to uphold when working in a project approach. Previous studies have shown that government ownership over development interventions is restricted when funds are controlled by the donor [32].
Alignment and harmonization, included within the broader term of aid coordination, is complicated by stand-alone projects with a narrow focus, specific reporting requirements, and financial procedures [31]. The use of the off-budget modality during this period has affected Government ownership of the health sector: impacting the ability of Government to plan, coordinate, and implement its chosen priorities in the health sector [32]. This not only reduced aid effectiveness but reduced the efficiency of Government support, through increased duplication and a lack of harmonization amongst actors.
With the aim of encouraging CPs to reintroduce onbudget support; the Government of Zambia have introduced a Government Management Capacity Strategic plan [33]. This has been somewhat successful: in 2016, Sida reintroduced on budget support to the Zambian health sector through the Reproductive, Maternal, Child, and Adolescent Health and Nutrition Programme: channeling earmarked funding through the MoH [18,22]. Yet, while GRZ have been working to return to the arrangement prior to 2009, other donors in the health sector have experienced significant change in the aid policies of their own countries, with a move away from on-budget support and a desire to redefine aid effectiveness [34]. The GRZ may need to adapt to this changed context, focusing on ensuring that all CPs are 'on plan', even if they are not 'on budget'.
Limitations of the study
The results of this study rely on secondary data produced by the Ministry of Finance in Zambia. Discussions with stakeholders in the Ministry of Finance and the Ministry of Health highlighted that the expenditure data retrieved from the blue books were not used, or known about, by those outside of its production. Therefore, it raises questions regarding the extent to which this data is validated by the respective ministries.
A further limitation of this study was the constraints regarding triangulation. In order to strengthen the study, data should be collected from the district level to identify whether data reported at the central level correlate with data at the district level. This would also enable the study to identify patterns in the timing of disbursements, to further understand the bottlenecks in financing for primary care. This study could be complimented by additional studies exploring the decision-making of donors discontinuing support to the district basket, to determine if and how support to primary care in Zambia was continued. Future studies should also consider coupling the quantitative data with qualitative interviews at district and central level to identify the explanatory factors for the shift in donor modalities as well as the increased expenditure on behalf of the government of Zambia.
Conclusions
This study aimed to examine the central budgetary allocations and expenditure at the district level for health in Zambia, to explore how primary care financing changed over the period of analysis, in the context of the 2009 corruption event. This paper demonstrates that while resources allocated and spent at the district level increased from 2006 to 2017, the human resources budget accounted for this increase and the operational grant declined.
This study highlights two important aspects. The first is that the government of Zambia was successful in quickly mitigating the financing gap for human resources for health that occurred as donors withdrew from the joint district funding mechanisms. The second learning is that the increase in government spending on human resources for health took place, at least partly, at the expense of district allocations in the operational grant. While the study did not look specifically into the explanations for this, it is reasonable to assume that human resources for health are inherently less flexible than operational grants, and therefore, that the operational grant, is more sensitive to decisions made by government or donors on funding allocations.
In a situation where on-budget support is no longer a possibility, both recipient and donor governments should look for new ways to implement the aid effectiveness principles, ensuring government ownership and alignment are continually prioritised; and that both are mindful of the effects of changes to funding on flexible budget lines in an effort to continue to provide primary health care and progress towards achieving health for all.
Availability of data and material
All data generated or analysed during this study are included in this published article [and its supplementary information files]. | 5,331.2 | 2020-02-19T00:00:00.000 | [
"Economics",
"Medicine",
"Political Science"
] |
DETECTION OF A HUMAN HEAD ON A LOW-QUALITY IMAGE AND ITS SOFTWARE IMPLEMENTATION
: The paper considers the task solution of detection on two-dimensional images not only face, but head of a human regardless of the turn to the observer. Such task is also complicated by the fact that the image receiving at the input of the recognition algorithm may be noisy or captured in low light conditions. The minimum size of a person’s head in an image to be detected for is 10×10 pixels. In the course of development, a dataset was prepared containing over 1000 labelled images of classrooms at BSTU n.a. V.G. Shukhov. The markup was carried out using a segmentation software tool specially developed by the authors. Three architectures of convolutional neural networks were trained for human head detection task: a fully convolutional neural network (FCN) with clustering, the Faster R-CNN architecture and the Mask R-CNN architecture. The third architecture works more than ten times slower than the first one, but it almost does not give false positives and has the precision and recall of head detection over 90% on both test and training samples. The Faster R-CNN architecture gives worse accuracy than Mask R-CNN, but it gives fewer false positives than FCN with clustering. Based on Mask R-CNN authors have developed software for human head detection on a low-quality image. It is two-level web-service with client and server modules. This software is used to detect and count people in the premises. The developed software works with IP cameras, which ensures its scalability for different practical computer vision applications.
INTRODUCTION
Task of people detecting, counting and recognizing is often arising when developing modern video analytics systems for monitoring of housing and business premises, road infrastructure. Its important subtask is to detect the head of a person who may be far away from the camera or turned back to the camera. The most popular methods work effectively only when a person had turned to the camera by face and a head occupies a significant part of the frame. Examples of such approaches are the Viola-Jones method (Viola et al., 2003) or a detector based on histograms of oriented gradients (HOG) (Dalal et al., 2005). Nowadays reliable methods of detecting and recognizing human faces based on deep learning are being widely studied and applied (LeCun et al., 2015). In this paper we explore various architectures of deep convolutional neural networks for human heads detection. An important area is also the development of software that implements deep learning approaches. For modern applications, it is necessary to analyze and apply the capabilities of popular open-source frameworks, for example, Tensorflow object detection API (Huang et al., 2017) or Mask R-CNN implementation (Waleed, 2017). However, special attention should be paid to the development and design of application systems with which the end user works. He usually wants to see the results of image recognition and the required statistics in convenient form. In such systems, in addition to the object detection module, much attention is paid to image capturing from one or several cameras, developing a database and creating user interfaces.
TASK FORMULATION
This paper considers the detection on two-dimensional images not only face, but head of a person regardless of the turn to the observer. Such task is also complicated by the fact that the image receiving at the input of the recognition algorithm may be noisy or captured in low light conditions. Also, the size of the object (human head) can vary widely in the image. The minimum size of a person's head in an image to be detected for is 10×10 pixels. The Fig. 1 shows examples of image fragments with which the developed detector should work. a b c d Figure 1. Examples of low-quality images for human head detection task
Commission II, WG II/5
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W12, 2019 Int. Worksh. on "Photogrammetric & Computer Vision Techniques for Video Surveillance, Biometrics and Biomedicine", 13-15 May 2019, Moscow, Russia Images of human heads in these images may be very small (Fig. 1a, b), may overlap and apply to people turned their backs to the video camera (Fig. 1c, d).
The main stages of solving the task of human head detection in a low-quality image are: 1) the formation of a suitable dataset; 2) the study of various architectures of convolutional neural networks, allowing to detect the human heads with acceptable quality. For practical applications, it is necessary that the quality measures Precision and Recall (Olson, 2008) exceed the value of 0.9 (with Intersection over Union IoU>0.5). The results of this work are planned to be used primarily for monitoring room attendance. First of all, it is important that the number of false positives be as low as possible (i.e. Precision should be as high as possible). A small number of passes is also desirable, but not so significant, because the system takes the maximum number of people present on the basis of several frames and, if a person was not found on one of the frames, he can be found and counted on others. The mean detection time per frame should not exceed 10 s; 3) software implementation of the system for detecting a human head based on a client-server approach in the form of a web application and testing its performance.
DATASET PREPARATION
In the course of development, a dataset was prepared containing over 1000 labelled images of classrooms at Belgorod State Technological University named after V.G. Shukhov (BSTU n.a. V.G. Shukhov). Images were taken under various lighting conditions and in the presence of interference and noise. The markup was carried out using a segmentation software tool specially developed by the authors (Yudin, 2018). The dataset consists of 1280×720 pixel color images (Fig. 2a). The smallest human head size is 10×10, the biggest size is 150×150. For each of the images a binary mask (reference markup) is assigned (Fig. 2b). If two objects (heads) overlap, then a dividing line with a width of 2 pixels is drawn between them. The number of objects in one image varies from 0 to 140. Training sample size: 500 images. The test sample also includes 500 images.
DEEP NEURAL NETWORKS ARCHITECTURES FOR HUMAN HEADS DETECTION ON A LOW-QUALITY IMAGE
Three architectures of convolutional neural networks were trained for human head detection task: a fully convolutional neural network (FCN) with clustering, similar to that described in , the Faster R-CNN architecture (Ren et al., 2015), and the Mask R-CNN architecture (He et al., 2017).
Fully convolutional neural network (FCN) with clustering
Fig . 3 shows the structure of the detector based on Fully convolutional neural network inspired by (Ronneberger, 2015). The result of the network in the form of a grayscale image is binarized using a manually defined threshold (equal to 100). Then the binarized image is clustered using the fast DBSCAN algorithm (Ester, 1996). The network training process is described in detail in (Yudin et al., 2018). During it the source color image of 1280×720 pixels and the corresponding binary mask of the same size were fed to the network input and output, respectively. Batch size is equal 1.
Faster R-CNN architecture
During training, the network implementation with the Tensorflow object detection API was applied (Huang et al., 2017). Faster R-CNN is more precise detector than SSD (Liu, 2016) or YOLO (Redmon, 2015) architectures so they are not covered in this paper. Before being fed to the network input, the original color image was converted to a size of 1024×1024. Based on the masks we have generated markup corresponding to the format tf.record. Batch size is equal 1. Weights pretrained in COCO dataset (COCO Consortium, 2018) were used for network initialization.
Mask R-CNN architecture
The Mask R-CNN model generates bounding boxes and segmentation masks for each object (human head) in the image. It's based on a ResNet101 backbone and Feature Pyramid Network (FPN) (Fig. 5). Training process is described in detail by (Waleed, 2017). When applying to the input network the original image is converted to an image size of 1024 × 1024. Batch size is also 1. Similarly, with the Faster R-CNN, the network is tuned using weights, pre-trained in COCO dataset. The output is also supplemented with masks for each of the objects contained in the image. Information about masks allows us to make the network more accurate, because in addition to the bounding box of the object, we get its semantic segmentation. This allows filtering false positives of the network. Table 1 shows a performance comparison of the three detectors based on deep convolutional neural networks.
Quality of human heads detection using deep neural architectures
The Mask R-CNN works more than ten times slower than the first one, but it almost does not give false positives and has the precision and recall of head detection over 90% on both test and training samples. The Faster R-CNN architecture gives worse accuracy than Mask R-CNN, but it gives fewer false positives than FCN with clustering. Since the task formulation pays special attention to the quality of object detection and does not impose high demands on the computation speed, the Mask R-CNN architecture is chosen for further use as part of the software application.
Software implementation
Based on Mask R-CNN architecture authors have developed software for human head detection on a low-quality image, the structure of which is shown in Figure 6. It is a two-level webservice with client and server modules. This software is used to detect and count people in the premises. The server module allows us to access video streams of a specified IP cameras list using the rtsp protocol, detect and count human heads using a trained R-CNN Mask neural network, save recognition results to files and a database based on SQLite DBMS, and also generate a log-files with a history of events. Resolution of IP cameras is 1920×1080 pixels. Server hardware includes processor Intel Core i5-4570 3.2GHz with 4 Cores, 8 GB RAM, graphic card NVidia GeForce GTX1080 8Gb. Server operation system is Windows 7. The server module is implemented in Python 3.5 using the vlc, pyqt5, keras, and django libraries. Apache is used as a web server. This solution is cross-platform and can function both under the Windows operating system and Linux. Access to the client module is carried out from any computer connected to the local network of BSTU n.a. V.G. Shukhov by IP address of the server. Updating the results of people counting is done 1 time per 1 minute (can vary depending on the requirements). Client module is developed using Angular framework. When you click on a thumbnail room image the client module shows the result of image recognition by a neural network with the detected human heads and their count.
CONCLUSIONS
The test results show that the usage of deep convolutional neural networks allows us to reliably detect a human head on 2D images regardless of the turn to the observer. The Mask R-CNN architecture demonstrates high accuracy rates even on low-quality images, but imposes significant limitations on the speed of such algorithms. However, a large number of computer vision applications do not require real-time object recognition. The developed software works with IP cameras, which ensures its scalability for detecting queues in buffets, visitors monitoring in retail, detecting pedestrians on the roads using outdoor video cameras, determining the workload of public transport stops, etc. | 2,662.8 | 2019-05-09T00:00:00.000 | [
"Computer Science"
] |
Regulation of actin cytoskeleton architecture by Eps8 and Abi1
Background The actin cytoskeleton participates in many fundamental processes including the regulation of cell shape, motility, and adhesion. The remodeling of the actin cytoskeleton is dependent on actin binding proteins, which organize actin filaments into specific structures that allow them to perform various specialized functions. The Eps8 family of proteins is implicated in the regulation of actin cytoskeleton remodeling during cell migration, yet the precise mechanism by which Eps8 regulates actin organization and remodeling remains elusive. Results Here, we show that Eps8 promotes the assembly of actin rich filopodia-like structures and actin cables in cultured mammalian cells and Xenopus embryos, respectively. The morphology of actin structures induced by Eps8 was modulated by interactions with Abi1, which stimulated formation of actin cables in cultured cells and star-like structures in Xenopus. The actin stars observed in Xenopus animal cap cells assembled at the apical surface of epithelial cells in a Rac-independent manner and their formation was accompanied by recruitment of N-WASP, suggesting that the Eps8/Abi1 complex is capable of regulating the localization and/or activity of actin nucleators. We also found that Eps8 recruits Dishevelled to the plasma membrane and actin filaments suggesting that Eps8 might participate in non-canonical Wnt/Polarity signaling. Consistent with this idea, mis-expression of Eps8 in dorsal regions of Xenopus embryos resulted in gastrulation defects. Conclusion Together, these results suggest that Eps8 plays multiple roles in modulating actin filament organization, possibly through its interaction with distinct sets of actin regulatory complexes. Furthermore, the finding that Eps8 interacts with Dsh and induced gastrulation defects provides evidence that Eps8 might participate in non-canonical Wnt signaling to control cell movements during vertebrate development.
Background
Remodeling of the actin cytoskeleton is critical for mediating changes in cell shape, migration, and adhesion. Actin filament architecture is regulated by a large group of actin binding proteins that modulate actin assembly, disassembly, branching, and bundling [1]. Actin organiza-tion is also regulated by growth factor signals that stimulate the activity of Rho family GTPases, which mediate actin remodeling and formation of stress fibers, filopodia, and membrane ruffles [2]. Although much has been learned about the general properties of actin binding proteins, the mechanisms by which these proteins control actin architecture in vivo are poorly understood.
Eps8 (EGF receptor pathway substrate 8) was originally identified as a substrate of the EGF receptor [3] and is the founding member of a multigene family of Eps8-like proteins named Eps8L1, Eps8L2, and Eps8L3 [4,5]. Eps8 is thought to transduce growth factor signals by acting as a scaffold protein to support the formation of multi-protein signaling complexes that promote the activation of Rho family GTPases. Consistent with this model, studies in Eps8 null fibroblasts showed that Eps8 is required for growth factor-induced Rac activation as well as Racdependent actin remodeling and membrane ruffling [6]. Eps8 is a critical component of a complex that contains the p85 regulatory subunit of phosphoinositide 3-kinase, Abi1, and Sos1, which acts as a guanine nucleotide exchange factor (GEF) for Rac [6,7]. Eps8 interacts directly with Abi1 through its SH3 domain, which possesses a novel peptide binding specificity [8], and this binding is thought to relieve auto-inhibition of Eps8 [9].
Eps8 also directly binds actin, suggesting that it may function by localizing Rac to sites of actin remodeling [10]. Eps8 binds actin through its C-terminal effector domain and expression of the effector region in serum-starved cells elicits Rac-dependent actin remodeling and membrane ruffling [10]. Studies using deletion mutants of Eps8 show that the C-terminal effector domain is required for localizing Eps8 to membrane ruffles and the transduction of signals to Rac [10]. A recent study revealed that Cterminal fragments of Eps8 also possess actin barbed-end capping activity in vitro and can substitute for capping protein in actin-based motility assays, suggesting a mechanism by which Eps8 might regulate actin filament dynamics in vivo [9]. Interestingly, full-length Eps8 on its own lacks capping activity in vitro, but can block actin polymerization in the presence of Abi1 [9]. The capping activity of Eps8 does not require Rac indicating that Eps8 can modulate actin dynamics through Rac-dependent and -independent mechanisms. Together, these data implicate Eps8 as a key regulator of actin filament dynamics and suggest that its activity is modulated through association with distinct sets of interacting regulatory proteins.
Eps8 has also been shown to bind Dishevelled (Dsh) [11], a key regulator of canonical and non-canonical Wnt signaling [12,13]. Dsh is required for the establishment of cell polarity and directed migration during gastrulation in vertebrates [14][15][16]. The mechanism by which Dsh controls cell polarity and migration is unclear, but is hypothesized to involve the modulation of actin dynamics through activation of RhoA and Rac [17,18]. The ability of Eps8 to bind both Dsh and actin and stimulate Rac activation suggests that Eps8 may play an important role in regulating Dsh function during gastrulation, but this possibility has not been investigated.
In this study, we utilized cultured mammalian cells and Xenopus embryos as model systems to investigate the mechanism by which Eps8 regulates actin filament architecture in vivo. Our results provide evidence that Eps8 can stimulate the assembly of distinct types of actin-based structures in cells and that the morphology of the actin structures induced by Eps8 is dependent on its interactions with Abi1. In addition, we show that Eps8 can recruit actin regulatory proteins, such as N-WASP and Dsh, to actin filaments and that mis-expression of Eps8 impairs cell movements during gastrulation in Xenopus embryos. Together, these data suggest that the role of Eps8 in modulating actin organization is multifaceted and is dependent on its participation in several potentially distinct multi-protein actin regulatory complexes.
Enhanced formation of filopodia-like structures in cells expressing Eps8
To gain insights into the role Eps8 plays in regulating actin filament architecture, we examined the effect of increasing Eps8 levels on actin remodeling in mammalian cultured cells. For these studies, we utilized the mouse melanoma cell line B16F1 [19], the human breast cancer cell line MDA-MB231 [20], and the MDA-MB231BO cell line, which is a highly metastatic, bone seeking clone of the parental line [21]. These cells were chosen because they are highly motile and express a variety of cellular protrusions including lamellipodia and filopodia. Control B16F1, MDA-MB231, and MDA-231BO cells stained for actin are shown in Figure 1. We found that expression of a c-myc epitope tagged version of mouse Eps8 (Eps8-myc) in B16F1 cells elicited the formation of filopodia-like structures, which stained brightly with phalloidin ( Figure 1D-I). The filopodia-like structures extended from lateral and dorsal regions of the cell and Eps8 localized along the length of these protrusions and was enriched at their tips ( Figure 1F, inset). Similar results were seen in MDA-MB231 ( Figure 1J-L) and MDA-MB231BO ( Figure 1M-O) breast cancer cells. More than 90% of the transfected cells displayed the actin phenotype shown. We also observed the formation of long, snake-like actin cables in approximately 50% of the MDA-MB231BO cells, which were typically not seen in either B16F1 cells or the parental MDA-MB231 cells.
Abi1 modulates Eps8-dependent actin remodeling
To test whether Abi1 can modulate the activity of Eps8 in cultured cells we examined the effect of co-expressing Eps8 and Abi1 on actin architecture. Similar to data reported previously [9], simultaneous expression of Eps8myc and Abi1-GFP in B16F1, MDA-MB231, and MDA-Eps8 induced actin remodeling in cultured cells MB231BO cells resulted in remodeling of the actin cytoskeleton characterized by formation of cable-like actin bundles within the cytoplasm (Figure 2A-I). The actin cables were typically found at the ventral surface of the cell and displayed few branches. Eps8 and Abi1 colocalized along the length of the actin cables. Interestingly, Abi1 was not enriched with Eps8 in filopodia-like structures ( Figure 2, arrowheads in A-C), suggesting that Abi1 may not contribute to Eps8-function at the plasma membrane. More than 95% of transfected cells displayed the actin phenotype shown. Expression of Abi1 alone (data not shown) or an Abi1 mutant (Abi1DY) unable to bind Eps8 [22] failed to stimulate actin cable formation ( Figure 2J-L), indicating that the ability of Eps8 to induce actin cables is dependent on its interaction with Abi1.
Eps8 induces actin remodeling in Xenopus embryos
To further examine the role of Eps8 in regulating actin architecture, we utilized Xenopus animal cap explants, which provide a powerful system for analyzing protein localization and function in vivo. Animal caps explants are dissected from blastula stage embryos and consist of an outer polarized epithelium and 2-3 layers of non-epi-thelial deep cells. We found that expression of Eps8 has different effects on actin organization in superficial epithelial cells versus deep cells. In control explants, actin filaments are enriched at apical cell-cell junctions in superficial epithelial cells ( Figure 3A) and at the cortex of deep cells facing the blastocoel ( Figure 3E). In superficial epithelial cells, Eps8 expression caused an accumulation of actin filaments at sites of cell-cell contact in apparent association with adherens junctions ( Figure 3B-D, arrowheads). In contrast, Eps8 expression induced the formation of cable-like actin structures within the cytoplasm of deep cells ( Figure 3F-H, arrows) and modified the morphology of actin filaments at the cell cortex ( Figure 3F-H, arrowheads). The morphology and length of the actin structures in deep cells was variable; long, unbranched filaments were observed in cortical regions in association with the free membrane domain that faces the blastocoel, whereas thick actin bundles were often seen throughout the cytoplasm. Staining of animal caps with anti-myc antibodies showed that Eps8-myc localized along the length of actin filaments in both deep and superficial cells ( Figure 3D,H; co-localization appears yellow). Thus, Eps8 associates with actin filaments and can dramatically affect
Abi1 modulates the activity of Eps8 in Xenopus embryos
To test whether Abi1 can regulate Eps8 function in Xenopus embryos, we co-expressed Eps8-myc and Abi1-GFP in animal cap cells and analyzed the localization of Eps8, Abi1, and actin by confocal microscopy. We found that when expressed alone, Abi1-GFP localized to small aggregates found throughout the cytoplasm and did not affect actin organization (data not shown). In contrast, simultaneous expression of Eps8-myc and Abi1-GFP induced the formation of star-like actin structures in superficial epithelial cells of the animal cap ( Figure 4A-C). Actin stars were found at the apical surface and consisted of actin-containing spikes radiating from a central actin foci or short bundle. The actin stars did not appear to protrude from the apical surface and Eps8 and Abi1 co-localized with actin in the stars. Since Eps8 and Abi1 facilitate signaling through Rac in cultured cells we tested whether actin star formation was dependent on Rac. In control animal caps, endogenous Rac was enriched at the cell cortex in association with cell-cell junctions (data not shown). In animal caps expressing Eps8 and Abi1, Rac was not recruited to the actin stars ( Figure 4D-F) suggesting that Rac activity is not required for actin star formation. In agreement with this idea, expression of dominant negative Rac (RacN17) failed to inhibit Eps8/Abi1-induced actin star formation (data not shown). Thus, Abi1 modulates Eps8 activity in Xenopus and Eps8 and Abi1 can stimulate actin remodeling in a Rac-independent manner.
Recruitment of Actin Regulatory Proteins to Eps8/Abi1induced actin structures in Xenopus
Eps8 has been shown to possess Abi1-dependent barbedend capping activity in vitro [9], suggesting that the effects we observed in Xenopus may be due to increased capping of actin filaments. To test this idea, we analyzed whether expression of capping protein induced similar changes in actin organization. Capping protein (CP) is an α/β heterodimer that is thought to provide the major barbed-end capping activity in eukaryotic cells [23,24]. In these experiments, animal caps expressing both the α and β subunits of CP were examined for changes in actin filament distribution. In addition, since both the α and β subunits were GFP-tagged, their expression was confirmed by Western blot analysis using anti-GFP antibodies (data not shown). We found that expression of CP had no effect on actin Abi1 modulates Eps8-induced actin remodeling in a Rac-independent manner organization in animal cap cells (data not shown). In addition, we found that expression of capping protein did not block the formation of Eps8/Abi1-induced actin stars, although low levels of capping protein were found to colocalize with the actin stars ( Figure 5A-D, arrowhead). Thus, the formation of actin stars does not directly correlate with enhanced capping protein activity nor does enhanced capping protein activity affect Eps8/Abi1induced remodeling of the actin cytoskeleton.
To test whether the formation of actin stars involves recruitment of WASP proteins we analyzed the distribution of N-WASP-GFP in animal cap cells expressing Eps8 and Abi1. N-WASP co-localized with Eps8 and actin ( Figure 5E-H), indicating that WASP proteins are recruited to Eps8/Abi1-induced actin structures. We also tested whether N-WASP activity is required for Eps8/Abi1induced actin star formation by co-expressing Eps8, Abi1 and a dominant negative form of N-WASP (N-WASP-CA). We found that N-WASP-CA expression did not significantly alter the actin structures induced by Eps8 and Abi1 (data not shown). These data suggest that Eps8 and Abi1 can recruit actin nucleators to specific sites in the cell, although N-WASP function may not be strictly required for Eps8/Abi1-induced actin remodeling.
Members of the Ena/VASP family are critical regulators of actin filament dynamics and are thought to antagonize actin filament capping at the leading edge of migrating cells [28]. Given this central role, we tested whether increased or decreased Ena/VASP activity would affect Eps8/Abi1-induced actin star formation. Expression of a dominant negative protein (FP 4 -mito-GFP, [28,29]) that specifically neutralizes the function of all Ena/VASP proteins was used to knockdown Ena/VASP activity whereas expression GFP-tagged Xenopus VASP (Xvasp) was used to increase Ena/VASP activity. The ability of the FP 4 -mito dominant negative to mis-localize Ena/VASP proteins in Xenopus was confirmed by showing that it caused the redistribution of endogenous Ena from the cell periphery to the mitochondria surface (data not shown). We found that neither FP 4 -mito-GFP ( Figure 5I-L) nor Xvasp-GFP ( Figure 5M-P) had an effect on the presence of Eps8/ Abi1-induced actin stars. In addition, Xvasp-GFP did not co-localize with the actin stars, indicating that Ena/VASP proteins are not recruited to these actin structures ( Figure 5M-P).
Eps8 recruits Dsh to the membrane and actin filaments
Previous studies have reported that Eps8 can bind the Wnt signaling protein Dsh [11], which is required for the transduction of both canonical and non-canonical Wnt signals [13]. Since Dsh is required for cell polarization and convergent extension movements during gastrulation [14][15][16]30,31], we hypothesized that the formation of an Eps8/Dsh complex may be important for regulating Dsh localization and function during gastrulation. To test this idea, we asked whether Eps8 interacts with Dsh in animal cap cells. When expressed alone, Dsh-GFP displays a punctate cytoplasmic distribution in animal cap explants ( Figure 6A) [32]. Expression of Eps8 caused a dramatic redistribution of Dsh-GFP to the plasma membrane and cytoplasmic actin filaments where it co-localized with actin and Eps8 ( Figure 6B-F). In superficial epithelial cells, Dsh was recruited to cell-cell junctions ( Figure 6B,C; arrow) and in deep cells Dsh was recruited to cytoplasmic actin cables ( Figure 6D-F; arrow) and the cell cortex (Figure 6D-F; arrowhead). Furthermore, we found that epitope-tagged forms of Dsh, Eps8, and Abi1 co-localize in animal cap cells ( Figure 6G-I) suggesting that they can form a tri-complex in vivo. These data provide evidence that Eps8 interacts with and may regulate the distribution and/or function of Dsh through recruitment of Dsh to the membrane and actin filaments.
Identification and developmental expression of Xenopus Eps8
Eps8 can interact with Dsh and is thought to play an important role in regulating actin remodeling in motile cells, raising the possibility that Eps8 might be a key regulator of cell movements during gastrulation in vertebrate embryos. To begin to address the role of Eps8 during embryonic development, we performed in silico analyses to identify the Xenopus ortholog of Eps8. Searches of the TIGR (TC263683) and NCBI (MGC81285; Image 6631907) databases led to the identification of cDNAs that encode Xenopus Eps8 (XEps8). The predicted XEps8 protein shows a high degree of sequence identity with both mouse and human Eps8 and contains the conserved PTB, SH3, and C-terminal effector domains. The developmental expression of XEps8 transcripts was determined by RT-PCR. We found that XEps8 transcripts are provided maternally and are present throughout development (Figure 7A). We also found that XEps8 is expressed in isolated dorsal and ventral marginal zone tissue of gastrula stage embryos and that levels of XEps8 are higher in dorsal marginal regions compared to ventral regions ( Figure 7A). Finally, we probed blots of embryonic lysates with anti-XEps8 polyclonal antibodies and found that XEps8 protein appears as a doublet and is present in unfertilized eggs, gastrula, and neurula stage embryos ( Figure 7B). These analyses show that XEps8 is expressed at the relevant time and place to regulate cell movements during gastrulation.
To test the requirement for XEps8 during development, we utilized a morpholino (MO) antisense oligonucleotide targeted to the 5'-untranslated region to specifically Dsh is recruited to the plasma membrane and actin filaments in response to Eps8 expression knockdown levels of XEps8 protein during development. We found that the XEps8 MO could specifically block the expression of a myc-tagged version of XEps8, but injection of the XEps8 MO into 4-cell stage embryos resulted in embryos with no apparent phenotype (data not shown). The lack of a knockdown phenotype is not surprising since Eps8 -/mice also displayed no obvious phenotype [6]. Since Eps8 is a member of a multi-gene family, we searched TIGR and NCBI databases for additional Xenopus Eps8 genes and found evidence for a second XEps8 gene as well as three XEps8-like genes. Therefore, the lack of a phenotype in XEps8 knockdown embryos is likely due to the expression of multiple XEps8 family members, including XEps8L1, XEps8L2, and XEps8L3, during early development (Roffers-Agarwal and Miller, unpublished results). Thus, assessing the role of Eps8 proteins in Xenopus will require novel knockdown techniques capable of simultaneously and specifically inhibiting the activity of multiple gene products during early development.
Expression of Eps8 disrupts cell movements during gastrulation
Since knockdown experiments produced negative results, we performed mis-expression experiments to test whether altering Eps8 activity would affect cell movements during gastrulation. Synthetic mRNA encoding mouse Eps8-myc or GFP as a control was injected into the equatorial region of both dorsal blastomeres at the 4-cell stage and resulting embryos were then examined for developmental abnormalities. Defects in Eps8-injected embryos were first apparent at stage 10.5 (early gastrula). At this stage, control embryos formed a well-defined dorsal lip indicative of the onset of gastrulation movements and involution of dorsal mesoderm. In contrast, Eps8-injected embryos showed a delay in the formation of the dorsal lip and when observed, the lip was disorganized (data not shown). By stage 12, Eps8-injected embryos displayed a severe delay in blastopore closure and buckling of tissue above the blastopore ( Figure 7D). Eps8-injected embryos eventually complete gastrulation and tadpoles displayed a phenotype including a shortened and arched anteriorposterior axis and head defects ( Figure 7F). The defects caused by Eps8 are dose dependent; low doses (50 pg) of Eps8 result in cyclopia and a shortened A-P axis, moderate doses (200 pg) show varying degrees of cyclopia, microcephaly, and shortening and arching of the A-P axis, and high doses (1 ng) result in varying degrees of anencephaly, shortening and arching of the A-P axis, and spina bifida. Control, GFP-injected embryos appeared normal at all stages examined ( Figure 7C,E). These data are consistent with the idea that Eps8-induced actin re-organization leads to defects in cell movements during gastrulation in Xenopus.
The gross morphological defects caused by dorsal expression of Eps8 could be the result of defects in convergent extension or inhibition of mesoderm development, both of which would give superficially similar phenotypes. In order to distinguish between these two possibilities we performed histological analysis on injected embryos (Figure 7G,H). Histological sections of Eps8-injected embryos demonstrated that notochord, somites, and neural tissue are all present, showing that expression of Eps8 does not globally perturb specification of mesodermal or neural cell fates. Instead, expression of Eps8 resulted in broadening of the notochord along the mediolateral axis and morphological defects in the neural tube and somites. The widening of the notochord is consistent with the idea that expression of Eps8 impairs convergent extension movements of the axial mesoderm.
Analysis of activin-induced elongation of animal cap explants provides a powerful assay for studying the cell movements associated with gastrulation [14,33,34]. In these experiments, the animal pole region of an embryo is removed at the blastula stage and placed in culture. Untreated animal cap explants and caps expressing Eps8 differentiate into atypical epidermis and remain rounded ( Figure 8A,B) whereas addition of recombinant activin induces mesodermal differentiation, convergent extension movements, and elongation of uninjected explants ( Figure 8C). We found that expression of Eps8 inhibits activin-induced elongation of animal cap explants ( Figure 8D). The failure of activin-induced animal caps to elongate was not caused by a block in mesoderm induction since both Xbra (pan-mesoderm) and XmyoD (paraxial mesoderm) were expressed in control and Eps8-injected animal caps following activin treatment ( Figure 8E).
Discussion
Here, we have investigated how Eps8 regulates actin filament architecture and how this activity impacts cell movements during gastrulation. Our results, together with previous studies, provide evidence that Eps8 plays multiple roles in regulating the actin cytoskeleton and that these functions are influenced by the participation of Eps8 in multi-protein actin regulatory complexes.
Based on in vitro studies, Eps8 is hypothesized to promote capping of actin barbed-ends in an Abi1-dependent manner [9]. Our findings suggest that in addition to its proposed role as a barbed end capping protein, Eps8 might play additional roles in regulating actin organization in vivo. This idea is supported by the observation that Eps8 expression resulted in enhanced formation of actin-rich filopodia-like structures in cultured cells and enhanced formation of actin bundles and accumulation of actin at cell-cell junctions in Xenopus embryos. The presence of the filopodia-like structures on the dorsal surface of cells suggests that they are protrusive in nature and do not represent retraction structures, which are typically associated with sites of cell adhesion. Additional studies examining the dynamics of these Eps8-induced structures will help clarify the origin and nature of these structures. In addition, we found that Abi1 modulated Eps8 activity, promoting the formation of actin cables in cultured cells and actin stars in Xenopus, suggesting that Eps8 can regulate actin dynamics through Abi1-dependent and -independent mechanisms. Consistent with this idea, Abi1 did not co-localize with Eps8 at the tips of the filopodia-like structures in cultured cells suggesting that additional regulators of Eps8 remain to be identified.
The correlation between Eps8 expression and enhanced formation of filopodia-like structures and actin cables is consistent with the idea that Eps8 may regulate actin filament elongation in vivo. Regulation of barbed-end elongation and filopodia formation is thought to involve a balance between barbed-end capping and anti-capping activities. Proteins such as CP are hypothesized to block elongation and favor formation of a dendritic network [35], whereas proteins including Ena/VASP proteins, which antagonize capping, are hypothesized to promote actin filament elongation and filopodia formation [28,36,37]. Our work examining the regulation of Eps8 activity by CP, N-WASP, and Ena/VASP in Xenopus yielded largely negative results, however, making it difficult to discern the relative contribution of Eps8 capping activity versus other potential modes of activity in the regulation of actin architecture. Further biochemical analyses will help elucidate the molecular mechanism(s) by which Eps8 regulates actin dynamics in vivo.
Previous work [6,7,9,38] and our results show that the ability of Eps8 to modulate actin organization is regulated by its interaction with distinct binding partners such as Abi1. We found that Abi1 can modulate Eps8 activity in cultured cells and Xenopus embryos. Abi1 binds to the SH3 domain of Eps8 [38,39] and it has been proposed that this binding may alter the conformation or activity of the adjacent actin-binding domain of Eps8 [9]. The mechanism by which Abi1 might regulate Eps8 activity remains unclear, but may involve recruitment of additional regulatory factors such as Dsh, Sos1, and Rac to the Eps8/Abi1 complex [7,38]. In addition, our work shows that N-WASP is recruited to Eps8/Abi1-induced actin stars suggesting that the Eps8/Abi1 complex interacts either directly or indirectly with actin nucleating factors. This idea is supported by the observation that Eps8 can facilitate actin-based motility of N-WASP-coated beads in vitro in the presence of Arp2/3, ADF/cofilin, and profilin [9]. Further studies will be required to examine how Abi1 modulates Eps8 activity and how Eps8 works with Abi1 and other regulatory factors to control actin organization in vivo.
Eps8 has been shown to bind Dsh [11], a component of the Wnt signaling pathway that is required for transduction of canonical Wnt/β-catenin and non-canonical signals [12,13]. Here, we have shown that Eps8 expression recruits Dsh to actin filaments and the cell membrane in Xenopus. These data are significant because the role of Dsh in non-canonical Wnt/Polarity signaling is thought to be dependent on its localization to the membrane and its ability to affect cell polarity and migration through regulation of the actin cytoskeleton [14][15][16][17][18]. Dsh activity during gastrulation is dependent on both RhoA and Rac, and the formin homology protein DAAM1 is required for Dshmediated activation of RhoA [17,18]. However, a link between Dsh and Rac has not been identified. The Eps8/ Abi1/Sos1 complex is required for growth factor stimulated activation of Rac [6], suggesting that Eps8 might provide an important link between Dsh, Rac, and the actin cytoskeleton during development. Consistent with this idea, expression of Eps8 impaired cell movements during gastrulation and Eps8, Abi1, and Dsh co-localize in Xenopus suggesting that these proteins can form a tri-complex in vivo. Interestingly, we did not observe an effect of Eps8 on Dsh-mediated induction of Wnt/β-catenin target genes (siamois and Xnr3, JRA and JRM unpublished results) indicating that Eps8 does not participate in canonical Wnt/βcatenin signaling. Unfortunately, our attempts to analyze the requirement for Eps8 in Xenopus were unsuccessful due to the expression of multiple Eps8 family members during early development. Thus, additional studies are necessary to determine the potential role of Eps8 in the transduction of non-canonical Wnt signals and the potential role of Eps8 family members during gastrulation in vertebrates.
Conclusion
How might Eps8 regulate the actin cytoskeleton in vivo? Our findings together with data from previous studies support the idea that Eps8 might regulate actin architecture in multiple ways. Eps8 can bind to both barbed ends and the sides of actin filaments [9,10] and it is possible that these different modes of actin binding mediate distinct effects on actin architecture in cells. Barbed-end capping activity might regulate actin filament dynamics and stabilize existing filaments whereas an alternative activity might promote the formation and maintenance of actin arrays required for protrusive force generation and cellular structures such as microvilli and filopodia. This idea is consistent with our observation that Eps8 is enriched at the tips of filopodia-like structures and localizes along the length of the filopodia-like structures and actin cables. This model is also in agreement with the observation that Eps8 localizes to microvilli in the intestinal epithelium of C. elegans and knockdown of Eps8 is associated with defects in microvilli formation [40]. The formation of actin cables in cells expressing Eps8 and Abi1 and actin clusters in Xenopus embryos suggests that Abi1 is a critical modulator of Eps8's activity as an actin regulatory protein.
The finding that Eps8 expression impairs cell movements during gastrulation provides further support for this view and underscores the idea that the proper balance of actin assembly, disassembly, and organization is essential for controlling morphogenetic movements during development. Thus, Eps8 has emerged as a critical regulator of actin filament dynamics and further analysis of Eps8 and its binding partners will help shed light on the mechanisms that mediate actin-based motility in vivo.
Cell culture, transfections, and imaging B16F1, MDA-MB231, and MDA-MB231BO cells were grown in DMEM (CellGro) supplemented with 10% FBS (HyClone) at 5% CO 2 . For transfections, cells were plated on acid washed coverslips and transfected with Lipofectamine (Invitrogen). For imaging, cells were washed once with PBS and fixed in 4% formaldehyde in CSK buffer (10 mM Hepes pH 7.5, 150 mM sucrose, mM EGTA, 0.1% Triton X-100) for 15 min. at room temperature. Alternatively, cells were permeabilized with 0.1% Triton X-100 in PEM buffer (10 mM Pipes pH 7.4, 1 mM EDTA, 1 mM MgCl 2 ) for 30 seconds and fixed with prewarmed 4% paraformaldehyde in PEM buffer for 30 min. at 37°C. Fixed cells were then washed three times in PBS + 0.1% Triton X-100 (PBST), and incubated in PBST, 2% BSA, 10% normal goat serum (NGS) to prevent non-specific binding of antibodies. Staining with primary and secondary antibodies was performed in PBST, 2% BSA, 10% NGS for 2 hours at room temperature. Images were collected using a Zeiss spinning disc confocal microscope and digital images were processed using Adobe Photoshop.
For imaging, embryos and explants were fixed in 4% formaldehyde in CSK buffer at room temperature for 30 min., washed three times in PBST, and incubated in PBST, 2% BSA, 10% NGS to prevent non-specific binding of antibodies. Staining with primary and secondary antibodies was performed in PBST, 2% BSA for 2 hours at room temperature. Actin was visualized with Alexa568 phalloidin (Molecular Probes). Images were captured with a Zeiss spinning disk confocal microscope and digital images were processed with Adobe Photoshop.
Protein lysates for Western blots were prepared by homogenizing embryos in ice-cold lysis buffer (20 mM Tris pH 7.5, 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 0.5% Triton X-100) supplemented with protease inhibitors (1 mM PMSF, 1 mM pepstatin, 10 µg/ml leupeptin, and 10 µg/ml aprotinin). Homogenates were cleared by centrifugation at 14,000 rpm for 10 min. at 4°C. SDS sample buffer was added to the cleared lysate and boiled for 4 min. prior to separation by SDS-PAGE. Approximately one embryo equivalent was loaded per lane on 10% gels (BioRad). Proteins were blotted to PVDF membrane (BioRad), blots were blocked in 5% milk in TBS + 0.1% Tween, and probed with anti-XEps8 antibodies (1:2000) for two hours at room temperature. Visualization was performed using a horseradish peroxidase conjugated anti-rabbit secondary antibody (Jackson ImmunoLabs) and enhanced chemiluminescence (Pierce). | 7,081.2 | 2005-10-14T00:00:00.000 | [
"Biology"
] |
Gastric Procathepsin E and Progastricsin from Guinea Pig PURIFICATION, MOLECULAR CLONING OF cDNAs, AND CHARACTERIZATION OF ENZYMATIC PROPERTIES, WITH SPECIAL REFERENCE TO PROCATHEPSIN
Procathepsin E and progastricsin were purified from the gastric mucosa of the guinea pig. They were converted to the active form autocatalytically under acidic conditions. Each active form hydrolyzed protein substrates maximally at around pH 2.5. Pepstatin inhib- ited cathepsin E very strongly at an equimolar concentration, whereas the inhibition was much weaker for gastricsin. Molecular cloning of the respective cDNAs permitted us to deduce the complete amino acid se- quences of their pre-proforms; preprocathepsin E and preprogastricsin consisted of 391 and 394 residues, respectively. Procathepsin E has unique structural and enzymatic features among the aspartic proteinases. Lys at position 37, which is common to various aspartic protein- ases and is thought to be important for stabilizing the
Procathepsin E and progastricsin were purified from the gastric mucosa of the guinea pig. They were converted to the active form autocatalytically under acidic conditions. Each active form hydrolyzed protein substrates maximally at around pH 2.5. Pepstatin inhibited cathepsin E very strongly at an equimolar concentration, whereas the inhibition was much weaker for gastricsin. Molecular cloning of the respective cDNAs permitted us to deduce the complete amino acid sequences of their pre-proforms; preprocathepsin E and preprogastricsin consisted of 391 and 394 residues, respectively. Procathepsin E has unique structural and enzymatic features among the aspartic proteinases. Lys at position 37, which is common to various aspartic proteinases and is thought to be important for stabilizing the activation segment, was absent at the corresponding position, as in human procathepsin E. The rate of activation of procathepsin E to cathepsin E is maximal at around pH 4.0. It is very different from the pepsinogens and may be correlated with the absence of Lys3'.
Native procathepsin E is a dimer, consisting of two monomers covalently bound by a disulfide bridge between 2 Cys3'. Interconversion between the dimer and the monomer was reversible and regulated by low concentrations of a reducing reagent. Although the properties of the dimeric and monomeric cathepsins E are quite similar, a marked difference was found between them in terms of their stability in weakly alkaline solution: monomeric cathepsin E was unstable at weakly alkaline pH whereas the dimeric form was stable. The generation of the monomer was thought to be the process leading to inactivation, hence degradation of cathepsin E in vivo.
The aspartic proteinase family, each member of which has 2 essential aspartyl residues at the active site, includes pepsins (pepsin A, gastricsin, and chymosin), cathepsin E, cathepsin D, and renin in mammals (reviewed in Refs. [1][2][3]. All these enzymes are thought to have diverged from a common ancestor. Significant differences, however, have been observed in their characteristics such as hydrolytic specificity and susceptibility to inhibitors, and this is reflected in the significant variations in primary structure among members of these groups. Therefore, to understand structure-function relationships of aspartic proteinases in greater detail, it was thought to be useful to elucidate the primary structures and enzymatic properties of those aspartic proteinases that have unique characteristics. Cathepsin E represents an important example of such aspartic proteinases. To date, it has been known that cathepsin E is a nonsecretory, intracellular, but non-lysosomal proteinase. Cathepsin E has been isolated from various tissues, such as human (4)(5)(6)(7)(8)(9)(10) and rat (11) gastric mucosa, rabbit (12) and rat (13) spleen, human (14) and rat (15) erythrocyte membranes, and rat neutrophils (16). Although various designations were used previously for the enzyme, the name "cathepsin E" is used at present (16)(17)(18). Cathepsin E is a dimeric enzyme different from other aspartic proteinases. The enzyme has a molecular mass of about 80 kDa, consisting of two identical 40-kDa subunits (9)(10)(11)(12)(13)16). On the other hand, the other aspartic proteinases are single polypeptides of about 40 kDa (1)(2)(3). The enzymatic properties of cathepsin E have been shown to resemble those of pepsins; for example, it has hydrolytic activity at acidic pH, with an optimum at pH 2-3, (7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17) and is sensitive to various pepsin inhibitors (7-9, 11, 13-16). Although the physiological role of cathepsin E is still unclear, it has been suggested to play an important role in intracellular processing of proteins and/or peptides (19,20), or in immune functions because of its distribution in lymphoid-associated tissue (9,21).
Structural studies of cathepsin E have not yet progressed to a level comparable with those of pepsinogens mainly because of the difficulty in obtaining a sufficient amount of the native enzyme. Recently, the primary structure of human cathepsin E was deduced from the molecular cloning and analysis of its cDNA (22), and the presence of a pro-peptide was demonstrated by isolation and NH2-terminal sequence analysis of human gastric procathepsin E and cathepsin E (23,24), indicating that autocatalytic activation of procathepsin E is involved in the generation of active cathepsin E (24).
The structural analysis also suggested that the dimeric form is produced by covalent association of two monomers through a disulfide bridge(s) (25,26). Therefore, to understand structure-function relationships of cathepsin E, it is important to clarify the differences in properties between the proenzyme and the active form and between the dimeric and monomeric forms. Further, it also seems useful to compare the primary structure of procathepsin E and some enzymatic properties of cathepsin E with those of other aspartic proteinases, especially the pepsinogens. Therefore, in the present study, guinea pig procathepsin E and progastricsin (type C pepsinogen) were chosen, since rodents are known to contain both proenzymes at high levels in gastric mucosa (11,21) to permit simultaneous purification.
Thus, we have carried out a series of studies including purification, molecular cloning of its precursor, and elucidation of enzymatic properties of cathepsin E. The results show that the primary structure and the process of activation of procathepsin E are markedly different from those of progastricsin and other aspartic proteinases. A notable difference in enzymatic properties was also found between the dimeric and the monomeric forms of cathepsin E.
EXPERIMENTAL PROCEDURES AND RESULTS'
Purification-The results of the purification are summarized in Table I. Procathepsin E and progastricsin were purified simultaneously from guinea pig gastric mucosa (Fig. 5). The level of procathepsin E in gastric mucosa was the highest among the animals examined so far. On the other hand, progastricsin was the predominant pepsinogen species in guinea pig gastric mucosa. Two progastricsin components were resolved by FPLC,' and they had quite similar amino acid compositions. The major component, which was eluted earlier on FPLC, was used for further characterization. Progastricsin became unstable during chromatography on the anion exchanger, in part as a result of its autocatalytic activation.
Each purified proenzyme gave a single protein band upon nondenaturing (Fig. 6) and denaturing (Fig. 3) PAGE. The molecular mass determined by SDS-PAGE under reducing conditions was about 43 kDa for each proenzyme. By contrast, the native procathepsin E was eluted at the position corresponding to a molecular mass of about 80 kDa on gel filtration and gave a band of protein with a similar molecular mass on SDS-PAGE under non-reducing conditions (Fig. 3). There-' Portions of this paper (including "Experimental Procedures," Figs. 5-13, and Tables 1-111) are presented in miniprint at the end of this paper. Miniprint is easily read with the aid of a standard magnifying glass. Full size photocopies are included in the microfilm edition of the Journal that is available from Waverly Press.
* The abbreviations used are: FPLC, fast protein liquid chromatography; SDS, sodium dodecyl sulfate; PAGE, polyacrylamide gel electrophoresis; kb, kilobase(s). fore, procathepsin E was deduced to be a dimer. Procathepsin E is a glycoprotein, and the content of carbohydrate was estimated to be about 4% by weight. The amino acid compositions of procathepsin E and progastricsin were rather similar except for notable differences in the content of a few amino acids, such as Asp, Ser, Pro, Met, and Tyr ( Table 11). The NH2-terminal sequences of about 30 residues of the proenzymes were determined by Edman degradation (Figs. 1 and 8). Only a single residue was identified at each step for both proenzymes. Thus, procathepsin E appears to be composed of identical subunits. Although some common residues were observed, the NH2-terminal sequences of procathepsin E and and Asn3", are potential N-glycosylation sites.
progastricsin are significantly different from each other. Molecular Cloning of cDNAs and Structural Amlysis-Among 2,000 recombinant clones of X g t l O prepared from the gastric mucosa of adult guinea pigs, about 50 clones hybridized very strongly with the radiolabeled 45-base oligonucleotide probe. Five clones were chosen at random, and the inserted DNA fragments were subcloned into pUC18 plasmid and definitively identified by sequence analysis. Since the NH2terminal sequences of both procathepsin E and progastricsin had been determined at the protein level, identification of clones was rather easy. Thus, these five clones were shown to be those of the cDNA for progastricsin. The restriction map and the nucleotide sequence of a typical clone (pGP461) are shown in Figs. 7 and 8, respectively. Using the cDNA for progastricsin as a probe, we rescreened the 50 clones under high stringency conditions. Two clones that did not hybridize under these conditions with the progastricsin cDNA were isolated and found by sequence analysis to be those of procathepsin E. The restriction map and the nucleotide sequence of one of the clones of procathepsin E (pGP477) are shown in Figs. 7 and 1, respectively.
The deduced amino acid sequences of the two proenzymes consist of three regions, i.e. the pre-peptide (signal peptide), the pro-peptide (activation segment), and the active enzyme. The signal peptides are composed of 19 and 16 residues, the pro-peptides are composed of 32 and 49 residues, and the active enzymes were composed of 340 and 329 residues for procathepsin E and progastricsin, respectively. The amino acid residues that are conserved in other mammalian aspartic proteinases were also well conserved in both proenzymes (Fig. 2). However, procathepsin E has some unique structural features. The lysine at position 37 (numbering for pepsinogen A from monkey), which has been suggested to be important for the function of the activation segment (27, 28), is absent. Some deletions and insertions were noted in the pro-peptide and the NH2-terminal region of the cathepsin E moiety as compared with the sequences of other mammalian aspartic proteinases (Fig. 2). Asn-67 and Asn-311 were found to be the potential N-glycosylation sites (Fig. 1). The molecular masses of procathepsin E and progastricsin were calculated to be 40,086 and 41,150 Da, respectively, based on the amino acid compositions deduced from the cDNAs.
Interconversion of the Dimeric and Monomeric Forms of Procathepsin E-The conversion of the dimeric procathepsin E to the monomeric form occurred in the presence of a low concentration of a reducing reagent. A typical result is shown in Fig. 9. The dimer was converted to the monomer to the extent of 30-50% by incubation of the former with 1 mM 2mercaptoethanol, L-cysteine, or reduced glutathione at 37 "C for 20 min. The conversion was complete with any one of these reagents at 10 mM under the same conditions (Fig. 9A). The conversion was reversible, since the dimeric form was regenerated after removing the reducing reagent (Fig. 9B).
Proteolytic activity was not affected by interconversion. When the monomer was carboxymethylated, carboxymethyl-Cys was determined to be 0.88 mol/mol of monomeric procathepsin E. This partially modified monomeric procathepsin E retained complete proteolytic activity. Carboxymethyl-Cys was identified at position 4 from the NH2 terminus of cathepsin E (position 37 of procathepsin E) by Edman degradation. The result thus provided direct evidence that the dimeric form is generated by formation of a disulfide bridge at between the two monomers.
Activation Profile-The profile of activation of the proenzyme was analyzed by SDS-PAGE (Fig. 3). Activation of procathepsin E proceeded autocatalytically under acidic con-ditions; the rate of activation was maximal at pH 4.0 and decreased gradually as the pH was lowered to 2.0 (Fig. 3A). In addition, appreciable activation occurred both at pH 5.0 and 6.0 upon prolonged incubation. The rate of activation did not change when monomeric procathepsin E was activated under the same conditions. Procathepsin E appeared to be directly converted to cathepsin E, since the intermediate form(s) was generated at a very low level (Fig. 3, B and C). The bands of procathepsin E and cathepsin E were detected at positions of 82 and 76 kDa, respectively, after SDS-PAGE under non-reducing conditions, whereas the pro-and active forms gave a band of 43 kDa and a band of 39 kDa, respectively, after SDS-PAGE under reducing conditions. Therefore, the dimeric form was maintained throughout the activation. Isolation and structural analysis of the active form revealed that the site of cleavage upon activation was the L e~~' -A s n~~ bond. Thus, the NH, terminus of cathepsin E is located 4 residues before C Y S~~. The cleavage site was the same when monomeric procathepsin E was activated under the same conditions. In addition, the profile of activation of guinea pig progastricsin was also examined (data not shown). The process was largely similar to that observed for other progastricsins (29), and the major cleavage site to generate gastricsin was the Phe4'-Ser5' bond.
Enzymatic Properties of the Active Forms-Cathepsin E and gastricsin are optimally active at around pH 2.5 toward hemoglobin as a substrate (Fig. 10). Cathepsin E has higher specific activity than gastricsin and porcine pepsin A. Both enzymes are inhibited by pepstatin, a specific inhibitor of aspartic proteinases (Fig. 11). Susceptibility of cathepsin E to pepstatin was the same as that of porcine pepsin A, the inhibition profile indicating the strong equimolar binding of pepstatin to the active site. The susceptibility of gastricsin was about 100 times lower than that of cathepsin E and porcine pepsin A. Low susceptibility has commonly been observed with gastricsins of other animals (30, 31).
Cathepsin E is easily converted to monomers in the presence of a low concentration of a reducing reagent, as was procathepsin E as described in the preceding section. Therefore, the difference in enzymatic properties between the dimeric and the monomeric forms of cathepsin E was investigated. Although the hydrolytic activity against hemoglobin at pH 2.0 was the same for both forms, a slight increase in activity was observed with the monomer at pH 5.0 as compared to the dimer. Such an increase was not observed, however, when the enzyme was assayed with other protein substrates (Table 111). By contrast, a striking difference between the dimeric and monomeric forms was found in terms of stability at weakly alkaline pH (Fig. 4). While the dimer was stable at weakly alkaline pH, the monomer lost its activity very rapidly above pH 7. On the other hand, gastricsin was very unstable at alkaline pH as reported for gastricsins of other animal sources (data not shown).
Expression of the Genes in Various Tissues-Expression of the genes for procathepsin E and progastricsin was examined in various tissues from adult guinea pigs by Northern analysis (Fig. 12). The mRNAs for both enzymes were expressed at a high level in gastric mucosa only. In addition, procathepsin E mRNA was found at a low level in spleen. The predominant species of mRNAs of procathepsin E and progastricsin had the same size of around 1.9 kb. The size is very similar to those of pepsinogen mRNAs of other mammals (32-341, but is different from that of human procathepsin E mRNA which has been shown to range from 2.2 to 3.6 kb (22).
~-K W L G L L G L ---V A l S E ----C L V T I -P~M K V K~M R E N L R E N D l L L D Y L E K H P Y R P T Y K L -L S
. .
-K T F~l~l l -~l~~~. l G O A P --~A~~R~. P . ! S R R E~l R~X l~l A O G~1 T E L W K S O~~~D~
H u m a n E
DISCUSSION
Procathepsin E and progastricsin were purified from the gastric mucosa of guinea pigs. The level of procathepsin E was 4-10 times higher than that in human gastric mucosa (8,10) and was the highest among those reported to date for various animal tissues. The reason for this high level is not clear, but it seems that the gastric mucosa of the guinea pig may serve as a good source of procathepsin E for future studies at the protein level. Progastricsin was found to be the major pepsinogen component rather than pepsinogen A. This result is consistent with the results obtained with rat stomach (32, 35).
The structures and some enzymatic properties of procathepsin E and progastricsin were determined and compared between the two proenzymes and also with those of other which is present in other mammalian aspartic proteinase zymogens, was not found in guinea pig nor in human procathepsin E. The positive charge of the lysine residue has been shown to provide electrostatic stabilization via hydrogen bonding to one of the net negative charges of the two aspartic acids at the active site (27, 28). Therefore, the lysine residue has been suggested to be essential for maintaining the proenzyme in an inactive form, thereby playing an important role in the activation of these aspartic proteinases. The activation of procathepsin E proceeded most rapidly at pH 4.0, and appreciable activation occurred at even higher pH. This phenomenon was markedly different from the pepsinogens which are activated most rapidly at pH 2.0 and below (36). The maximal activation a t weakly acidic pH may be correlated with the absence of L y P in procathepsin E, since electrostatic stabilization is thought to be weak in procathepsin E. Since procathepsin E is a non-secretory intracellular proteinase and since its activation would occur a t physiological pH, the maximum rate of activation a t weakly acidic pH seems to be well adapted to the physiology of the proenzyme. LysBi was conserved in guinea pig progastricsin. When the sequences of the connecting region of the pro-peptide and the cathepsin E moiety of guinea pig and human procathepsin E were compared with those of other aspartic proteinases, deletions of several residues around Cys3' appears to be significant (Fig. 2). The cleavage sites associated with activation in other aspartic proteinases, in particular in pepsinogens A and progastricsins, are located in this area (29) and indicate a high degree of conformational lability (28). Therefore, if the deleted positions of procathepsin E were actually occupied by amino acids, cleavage might occur at these sites after C y P , resulting in the generation of monomeric cathepsin E after activation. Therefore, the deletions may be essential for the cleavage before Cys3' and, thus, for maintaining the dimeric form via a disulfide bond during activation. In progastricsin, the activation segment is composed of 49 residues, the longest among known sequences of mammalian aspartic proteinases. The role of this extended segment, however, remains to be clarified, since the cleavage site for activation is the same as that of rat progastricsin, which has a shorter segment of 46 residues.
With respect to the structure of the cathepsin E moiety, the common residues among other aspartic proteinases, including those around the 2 aspartic acid residues of the active site, are well conserved (Fig. 2). One notable point is the lower level of basic residues as compared with cathepsin D, the other intracellular aspartic proteinase (Table 11). The level is comparable to that in pepsins and gastricsins and this similarity may be correlated with the optimal activity at lower The structure of guinea pig procathepsin E is most similar to that of human procathepsin E with 86 and 84% identity at the nucleotide and the amino acid levels, respectively (Fig. 2). The identity with other aspartic proteinases is less than 60%. The evolutionary relationships among various gastric aspartic proteinase zymogens, including pepsinogens A, prochymosins, and progastricsins, have been deduced (33,37,38). However, the relationship between procathepsin E and these gastric aspartic proteinases and other non-pepsin-type aspartic proteinases, such as cathepsin D and renin, have not been elucidated. Therefore, we constructed a phylogenic tree to examine the relationships among various aspartic proteinases including procathepsin E (Fig. 13). The tree shows clearly that procathepsin E is closer to pepsinogens than are procathepsin D and prorenin.
The generation of a dimer is characteristic of (pro)cathepsin E. The present results provide direct evidence that a disulfide bridge involving Cys3' between the two monomers is responsible for generating the dimer. The interconversion between the dimer and the monomer is reversible: the dimer is easily converted to the monomer in the presence of a low concentration of a reducing agent ( Refs. 25 and 26, Fig. 9A), and the monomer is forced to regenerate the dimer in the absence of a reducing agent (Fig. 9B). Such high susceptibility to a reducing agent is thought to be due to the tertiary structure of procathepsin E, in which the region around Cys3' is presumed to be on the surface of the protein as expected from the tertiary structures of other aspartic proteinases (27, 28). Therefore, it may be reasonable to consider that a reducing agent, such as glutathione, may regulate the interconversion between the two forms in vivo. Indeed, the occurrence of the monomeric form of procathepsin E has been detected in human gastric muco~a.~ The interconversion between the two forms may have little significance in the case of procathepsin E, since no difference in properties was found between the two forms. On the other hand, the conversion of the dimer to the monomer seems critical in the case of cathepsin E. Mon-pH.
omeric cathepsin E is more unstable than dimeric cathepsin E at weakly alkaline pH (Fig. 4). This characteristic is very similar to that of pepsin, although pepsin is inactivated more rapidly at weakly alkaline pH (39). Considering that a drastic conformational change is involved in the process of alkali denaturation of pepsin (40), monomeric cathepsin E may be more susceptible to a conformational change at weakly alkaline pH than is dimeric cathepsin E. Therefore, the dimeric form is thought to be essential for stabilizing cathepsin E.
Thus, the generation of the monomeric form may be important in the degradation of cathepsin E in vivo. On the other hand, cathepsin E, as well as procathepsin E, is rather unstable at weakly acidic pH in both dimeric and monomeric forms, presumably due to autodigestion, since the enzyme has appreciable proteolytic activity under weakly acidic conditions (41) ( Table 111).
The tissue distribution of procathepsin E is rather limited. Northern analysis showed that the level of expression of the gene for procathepsin E is high in gastric mucosa while the mRNA for procathepsin E is just detectable in spleen. This distribution suggests that the enzyme has a role that is correlated with gastric physiology. The proenzyme has been shown to be localized in surface epithelial cells of human (9) and rat (42) stomach. It was suggested that cathepsin E could play a role in gastric mucosal injury (9). Furthermore, preferential expression of procathepsin E in fetal gastric mucosa , R. A., Richards, A. D., Kay, J., Dum, B. M., Wyckoff, J. B., Samloff, 19. Lees, W. E., Kalinka, S., Meech, J., Capper, S. J., Cook, N. D., and Kay, J. 20. Sakamoto, W., Yoshikawa, K., Yokoyama, A,, Tables I and 111. protein substrates including heaoglobln, by the same procedure as described TO examine o f substrate specificity. the enzyme was essayed w i t h vaz'ious above except that ;he concentration of substrate was 1% and that a fluorometric assay ( 5 3 ) was used f~ quantirate trichloroacetic acid-soluble peptides.
1\11 procedures except for FPLC were performed at O-4OC.
Chromatography and gel filtration were carried Out in 0.01 M bodium phosphate buffer, pH 7 . 0 .
Step 1 . preparation of the nucosal Extract -Gastric mucosa (total weight. guinea pigs and homogenized in a Waxing blender with 40 m l of the buffer. The assay mixture Of monomeric cathepsin E contained 0.5 mM 2-mercaptoethanol. Each reaction was stopped by the addition Of 400 p l Of 5% trichlDroaCetic acid. After centrifugation, an aliquot Of the supernatant "1)s subjected to a fluorometric assay with fluorescarnine ( 5 3 ) to determine the amount of trichlo-rOBCetiC acid-Soluble peptides. Activity is expressed relative to rhe activity Of the native dimeric form of cathepsin E against bovine hemoglobin. vhicP was taken 8s-00% and corresponded to the release of 0.31 pmol leucine mi"' heart cytochrome C: OVA, egg albumin; TG, human gama-globulin.
Activity (6)
Northern Blot Analysis subjected to alactrophoreeis in a 1% og11108e gel that contained 1.1% forme-Five pg Of the total RNA from various guinea-pig tissues were denatured and mid*. After the RNA had een transferred to DitrDcelluloSe paper, the paper was hybridized with the 3qP-lebelled &NAP for procathepsin E and progastric-Sin under high-stringency conditions. The sires Of RNA3 were estimated by reference to the mobilities Of fragments Of ADNA generated by digestion with XindlII. | 5,815.6 | 1992-08-15T00:00:00.000 | [
"Biology",
"Chemistry",
"Medicine"
] |
Single Chiral Skyrmions in Ultrathin Magnetic Films
The stability and sizes of chiral skyrmions in ultrathin magnetic films are calculated accounting for the isotropic exchange, Dzyaloshinskii–Moriya exchange interaction (DMI), and out-of-plane magnetic anisotropy within micromagnetic approach. Bloch skyrmions in ultrathin magnetic films with B20 cubic crystal structure (MnSi, FeGe) and Neel skyrmions in ultrathin films and multilayers Co/X (X = Ir, Pd, Pt) are considered. The generalized DeBonte ansatz is used to describe the inhomogeneous skyrmion magnetization. The single skyrmion metastability/instability area, skyrmion radius, and skyrmion width are found analytically as a function of DMI strength d. It is shown that the single chiral skyrmions are metastable in infinite magnetic films below a critical value of DMI dc, and do not exist at d>dc. The calculated skyrmion radius increases as d increases and diverges at d→dc−0, whereas the skyrmion width increases monotonically as d increases up to dc without any singularities. The calculated skyrmion width is essentially smaller than the one calculated within the generalized domain wall model.
Introduction
The individual (single) magnetic skyrmions have attracted considerable attention from researchers assuming potential applications in spintronic and information processing devices [1]. To achieve efficient manipulation of the skyrmion spin textures and to realize skyrmion-based low energy consumption devices, it is essential to understand the magnetic skyrmion stability and dynamics, for instance, in ultrathin ferromagnetic films.
The chiral magnetic skyrmions are a kind of magnetic topological soliton [2] in 2D spin systems characterized by a non-zero skyrmion number (topological charge, degree of mapping) defined as N = d 2 ρm · (∂ x m × ∂ y m)/4π, where m(ρ) = M(ρ)/M s is the unit magnetization vector, M s is the material saturation magnetization, and ρ = (x, y) are in-plane spatial coordinates. The number N = ±1, ±2, . . . is an integer for an infinite film. This topological charge can be interpreted as a quantized flux of the emergent magnetic field [3] through the film surface, Φ = |N|Φ 0 , where Φ 0 = h/e is the flux quantum.
The relativistic Dzyaloshinskii-Moriya exchange interaction (DMI) leads to the stabilization of chiral Neel or Bloch skyrmions with a given sense of the magnetization rotation within their internal configuration [1]. The role of the DMI in skyrmion stabilization was discussed in Refs. [4][5][6][7]. Following the ideas of Dzyaloshinskii [4], in Ref. [5] it was found that adding the term D[m · (∇ × m)] (linear in spatial derivatives of magnetization) to the magnetic energy density of an infinite cubic ferromagnet leads to the stabilization of an inhomogeneous magnetization texture for any finite value of the DMI parameter D. Such terms are allowed in magnetic crystals whose symmetry group lacks the space inversion symmetry operation (e.g., in the B20 cubic crystals MnSi, FeGe, etc. [1]). Then, it was shown [6] that accounting for DMI in the form of the Lifshitz invariants in a bulk uniaxial ferromagnet results in the instability of the uniform ferromagnetic state at D > D c = (4/π) √ AK, where A is the exchange stiffness and K is a uniaxial anisotropy constant. The 1D spin spiral becomes the ground state at D > D c . Therefore, DMI can stabilize 2D vortices (Bloch skyrmions, in modern terminology) for moderate values of D. Ivanov et al. [7] showed that the Bloch skyrmions in infinite films with easy axis anisotropy can be stabilized either by DMI or a high-order exchange interaction. Another kind of single chiral skyrmion (Neel skyrmions) was recently observed at room temperature by Boulle where the unit vector z is normal to the interface. The DMI lowers the skyrmion energy for the proper skyrmion chirality.
In the case of an infinite ferromagnetic film, the critical D value presumably remains the same as for bulk crystals, D c = (4/π) √ AK, although the effective anisotropy constant K is different. The isolated skyrmions are metastable at D < D c at zero external magnetic field, and other configurations (e.g., spin spirals, skyrmion lattices, stripe domains) are stabilized at D > D c [12,13].
In this article, we calculate the magnetic energy of a single chiral skyrmion in ultrathin magnetic film and determine the area of the skyrmion metastability, skyrmion magnetization profiles, and the equilibrium skyrmion radius and width. The case of an effective out-of-plane magnetic anisotropy is analyzed.
Methods
Let us consider an infinite magnetic film with thickness L of about 1 nm, and parameterize the unit magnetization vector by the spherical angles, m = m(Θ, Φ). The spatial distribution of magnetization is assumed to be independent of the thickness coordinate z. The angles Θ, Φ are functions of the polar radius vector ρ = (ρ, φ) located in the film plane. For this kind of magnetization configuration, the total magnetic energy functional is E[m] = L d 2 ρε(m) [6,7], with the energy density where A is the material exchange stiffness, ε DMI is the DMI energy density, with D being the DMI parameter, K u > 0 is the out-of-plane uniaxial anisotropy constant, m z is the magnetization z-component, and ε m is the magnetostatic energy. The interface DMI density is ε DMI (m) = D[m z (∇ · m) − (m · ∇)m z ] for the Neel skyrmions, or ε DMI (m) = D[m · ∇ × m] for the Bloch skyrmions, in thin films of the B20 cubic crystals. The magnetostatic energy ε m (m) is non-local in a general case. The volume and surface magnetic charges contribute to the magnetostatic energy. However, within the limit of ultrathin film, the volume magnetic charges can be neglected, and only surface magnetic charges on the film top/bottom surfaces related to the out-of-plane magnetization component m z contribute to the magnetostatic energy. Then, the magnetostatic energy density can be essentially simplified and written in the local form ε m (m) = µ 0 M 2 s m 2 z /2 [2,7] for both kinds of skyrmion. Therefore, the energy is accounted via an effective uniaxial anisotropy constant K = K u − µ 0 M 2 s /2 > 0. We also define the characteristic magnetic material length l = √ A/K, and the reduced dimensionless DMI strength d = Dl/A. We search for axially symmetric inhomogeneous magnetization configurations (m depends only on the radial coordinate ρ), that is, the magnetization angles are Θ = Θ(ρ), Φ = ϕ + ϕ 0 (ϕ 0 = 0, π for the Neel skyrmions or ϕ 0 = ±π/2 for the Bloch skyrmions). The total skyrmion magnetic energy as a functional of the skyrmion magnetization is represented by the polar magnetization angle Θ(ρ), E = E[Θ(ρ)]. The DMI energy depends on the skyrmion chirality C = ±1, which is defined as C = sin ϕ 0 for the Bloch skyrmions and C = cos ϕ 0 for the Neel skyrmions. The sign of DMI strength D depends on the particular ferromagnetic material. Appropriate choice of the sign of chirality at given D ensures that the product DC corresponds to negative DMI energy. We use the total reduced energy of the radially symmetric Bloch or Neel skyrmion (in units of 2π AL) which depends only on one material parameter: the reduced DMI strength, d.
The simplest magnetization distribution Θ(ρ) = 0 corresponds to the energy E[0] = 0 and describes the magnetic film ground state. However, there are metastable magnetization configurations with non-trivial dependence Θ(ρ), which can be found from the solution of the Lagrange-Euler equation corresponding to the energy functional given by Equation (2). The Lagrange-Euler equation is a non-linear differential equation and cannot be solved analytically. Therefore, we use the different approximate solutions below or trial functions for the skyrmion magnetization profile Θ(ρ). Introducing a trial function (skyrmion ansatz) to the energy functional (2), one can calculate the energy of the skyrmion configuration. The simplest trial function, sometimes used in the theory of domain walls and skyrmions [12], is a linear ansatz: is the skyrmion radius), and Θ(ρ) = 0 otherwise. The simplicity of this ansatz allows conduction of the integration in Equation (2) to get the energy E lin (r s ) = λ + r 2 s − πdr s , where d > 0, λ = 6.154. The skyrmion equilibrium radius r s = R s /l within the model is r s = πd/2, and the skyrmion energy is E lin = λ − π 2 d 2 /4. The linear model predicts that the skyrmion is in a metastable state at d < 2 √ λ/π ≈ 1.58 and that its energy is lower than the energy of the collinear out-of-plane magnetization state at d > 2 √ λ/π. We can write the Lagrange-Euler equation for the function Θ(ρ) to minimize the skyrmion energy (2) using the substitution tan (Θ(r)/2) = exp (− f (r)) [14]. The boundary conditions for the function Θ(ρ) are Θ(0) = π and Θ(∞) = 0 [2,15] or f (0) = −∞ and f (∞) = ∞. We define the skyrmion radius by the equation m z (R s ) = 0, Θ(R s ) = π/2 or f (r s ) = 0, where the reduced radius is r s = R s /l.
The approximate solution of the Lagrange-Euler equation at r >> 1, far from the skyrmion center r = 0, and d = 0, is f (r) = (r − r s ). This is an often-used radial domain wall ansatz taken from the theory of bubble domains in infinite films [16]. This ansatz does not satisfy the boundary condition f (0) = −∞, resulting in singularity of the exchange energy at r = 0. Many authors, including Rohart et al. [17] and Buettner et al. [18], considered the skyrmion magnetization configuration as a circular domain wall (DW) located at the skyrmion radius position R s , described by the singular domain wall ansatz tan (Θ(ρ)/2) = exp (±(ρ − R s )/∆), where ∆ is the wall width. In the limit of large radius skyrmion with a sharp edge R s /∆ >> 1, the radial DW model becomes asymptotically exact. Recently, it was generalized by Kravchuk et al. [15] considering the domain wall width as a variable δ different from its nominal value ∆ = √ A/K. The generalized DW ansatz can be used with caution only within the limit r s /δ >> 1 (i.e., for the large radius skyrmions) if one conducts integration in Equation (2) in the interval r ∈ [r s − δ, r s + δ] near the skyrmion edge. To avoid singularity at the origin r = 0 and describe the whole range of the skyrmion radii r s , we use the trial function f (r) = ln (r/r s ) + (r − r s )/δ suggested by DeBonte [19] to describe the bubble domains in infinite films. Although such a function is not a solution of the Lagrange-Euler equation, it is evident that f (r) leads to finite exchange energy and satisfies the boundary conditions. Below, we use the generalized DeBonte ansatz f (r) = ln (r/r s ) + (r − r s )/δ, where the skyrmion radius r s and the skyrmion width δ are variable and depend strongly on the DMI strength, d. The equalities cos Θ(r) = tanh f (r), sin Θ(r) = 1/ cosh f (r) allow us to calculate the skyrmion energy (2) and find the areas of the skyrmion metastability/stability. We consider that a skyrmion's state is stable when it has the lowest energy (ground state) in comparison with other magnetization states. A skyrmion state is metastable when it corresponds to a minimum of the magnetic energy, however, its energy is higher than that of some other magnetization configurations (a local minimum of the Materials 2018, 11, 2238 4 of 9 energy). The Bloch (Neel) skyrmion energy E(r s , δ) within the generalized DeBonte model is a function of two parameters, r s and δ. Accounting Θ r = −(1/δ + 1/r) sin Θ we rewrite the skyrmion exchange energy in the form The exchange energy (3) was calculated by DeBonte, yielding the simple expression where ξ = r s /∆ ≥ 1 is the reduced skyrmion radius, and 1/∆ = 1/r s + 1/δ is the reduced inverse skyrmion width. In the limit of small radius skyrmion r s → 0 , when the exchange energy dominates over other contributions to the energy density, the exchange energy is reduced to the well-known Belavin-Polyakov soliton limit [20], E ex ( ξ → 1 ) = 4, which is determined solely by the skyrmion charge |N| (|N| = 1 for the skyrmions considered here). The magnetic anisotropy and DMI energy can be represented using DeBonte ansatz as where x = ξ − 1, and the functions F a (x), F(x) are defined as integrals The function F(x) > 0, therefore we chose the sign of dC > 0 and below use the substitution dC → d .
Results and Discussion
The total skyrmion magnetic energy within the model can be represented as function of two variable parameters, ξ and ∆: The equation ∂E(ξ, r s )/∂r s = 0 leads to r s (ξ) = dF(ξ − 1)/2F a (ξ − 1) and allows us to exclude r s from the minimization procedure and write an analytical equation for the equilibrium skyrmion radius as an inverse function of the DMI parameter d(ξ) It immediately follows from Equation (6) that the reduced skyrmion radius ξ is a function of d 2 , ξ = φ(d 2 ), and for large radius skyrmions ξ >> 1, d(ξ >> 1) → d c = 4/π , or the equilibrium skyrmion reduced radius diverges, ξ(d) → ∞ , at d → d c − 0 . In the vicinity of d c , Equation (6) [15]. At ξ >> 1 F a (ξ − 1) = ξ −2 ln (1 + exp (2ξ)) ≈ 2/ξ, F(ξ − 1) ≈ π + O(e −ξ ), the skyrmion energy is essentially simplified, E(ξ, r s ) = 2ξ − πdr s + 2(1 + r 2 s )/ξ, and is reduced to one, accounted in the generalized DW model. We note that the DMI and anisotropy energies are proportional to r s , whereas the exchange energy is not: it contains the term 1/r s even within the simplified DW model. This is in disagreement with the statement by Bernand-Mantel et al. [21] that the exchange energy is linearly proportional to r s . Note that the critical value of d c = 2/π, two times smaller than d c = 4/π ≈ 1.273, was calculated for the isolated chiral skyrmions in infinite films in zero external magnetic field by Kiselev et al. [12], and later this value was corrected by Leonov et al. [13] to be d c = 1.224.
In the limit of small DMI strength d << 1, r s (d) cannot be directly determined from the equation r s (d) = dF(0)/2F a (0) because F(0) = 4 is finite, but F a ( x → 0 ) → ∞ is singular. The non-analytic behavior of the function F a (x) at x → 0 can be approximately presented as F a (x) = F a (1)/x α . To calculate ξ(d), we need to analyze the exchange energy. The approximate Equation (3b) has very good accuracy at ξ ≥ 2, but it predicts a wrong asymptotic behavior at ξ → 1 + 0 and the exact Equation (3a) should be used instead within this limit. We rewrite the exchange energy in the form The functions n (x) are not analytic at x = ξ − 1 → 0 , but it is possible to calculate 0 (0) = π and −1 (x) = −1 (0) + −1 (0)x, −1 (0) = 2, −1 (0) = −π. Therefore, the asymptotic behavior of the function E ex (x) is determined by the first term in Equation (7), Using this expression, we can solve Equation (6) in the limit x = ξ − 1 → 0 and get ξ(d) . Numerical calculation of the asymptote of F a ( x → 0 ) showed that the exponent α = 2/3, and F a (1) = 1.121. Therefore, the skyrmion radius calculated within the generalized DeBonte model at d << 1, r s (d) = ∆(d) = (4/F 3 a (1))d 3 , is essentially smaller than the radius predicted by the generalized DW model. The skyrmion energy is E(d) = 4 1 − F −3 a (1)d 4 . It is slightly higher than the DW model energy E DW (d) = 4 1 − (d/d c ) 2 /2 . This is not a surprise because the DW model containing integration in the vicinity of r s always underestimates the skyrmion energy at d < d c .
The equilibrium skyrmion radius, width, and energy vs. the DMI strength are shown in Figures 1-3 generalized domain wall (DW) ansatz [15] (dashed red line); (3) linear ansatz [12] (dotted blue line). The radius obtained from numerical minimization of the skyrmion energy (2) is shown by deep green squares. Figure 1). The generalized DW model [15] predicts the skyrmion width for intermediate values of d, which is approximately two times larger than one calculated within the generalized DeBonte ansatz (see Figure 2). The skyrmion energies calculated within the DeBonte and DW models are very close for 0 c dd , whereas the linear model [12] overestimates the skyrmion energy up to 50% and predicts the wrong value of c d (see Figure 3). The skyrmion radius s rd ( Figure 1) and skyrmion energy (Figure 3) calculated analytically using the DeBonte ansatz and numerically practically coincide.
Above, we calculated the stability of the chiral Bloch and Neel skyrmion magnetization The DW ansatz and linear skyrmion ansatz result in the incorrect dependence r s (d), especially at small d (d < 1) (see Figure 1). The generalized DW model [15] predicts the skyrmion width for intermediate values of d, which is approximately two times larger than one calculated within the generalized DeBonte ansatz (see Figure 2). The skyrmion energies calculated within the DeBonte and DW models are very close for 0 < d < d c , whereas the linear model [12] overestimates the skyrmion energy up to 50% and predicts the wrong value of d c (see Figure 3). The skyrmion radius r s (d) (Figure 1) and skyrmion energy (Figure 3) calculated analytically using the DeBonte ansatz and numerically practically coincide.
Above, we calculated the stability of the chiral Bloch and Neel skyrmion magnetization configurations in ultrathin films as a function of the DMI strength. The second derivative of the skyrmion energy (5) ∂ 2 E/∂r 2 s = 2F a (x) > 0. Therefore, the sufficient condition of existence of the skyrmion local energy minimum (∂ 2 E/∂r 2 s )(∂ 2 E/∂ξ 2 ) − (∂ 2 E/∂r s ∂ξ) 2 > 0 is satisfied for the skyrmion solution ξ(d), r s (d) within the interval 0 < d < d c . The isolated skyrmions are metastable within a range of the values of d satisfying the inequality d < d c and do not exist at d > d c (the skyrmion minimum transforms to an energy maximum at d = d c ).
To describe skyrmion magnetization analytically we used the DeBonte radial domain wall ansatz [19], the accuracy of which was numerically checked for circular dots in Ref. [22]. The calculated equilibrium skyrmion radius R s (d) and the skyrmion width ∆(d) increase with increasing DMI strength ( Figure 1). However, the continual model becomes inaccurate for the sizes below 1 nm.
The typical values of A= 10 pJ/m and K= 0.1 MJ/m 3 yield the magnetic length l = 10 nm for ultrathin films. The conditions R s (d) ≥ 1 nm, ∆(d) ≥1 nm mean that the continual model can be applied if the reduced DMI strength d ≥ 0.2 or |D| ≥ 0.2 mJ/m 2 in absolute units. Simulations [23] within a discrete model on a simple cubic lattice with period a showed that the skyrmion state collapses to the uniformly magnetized state at R s ≈ (4 ÷ 5)a or R s ≈1.0-1.3 nm for Co. We note that in restricted geometry (circular dots) the skyrmion radius dependence on the DMI strength R s (d) has an inflection point at d ≈ d c [17,24] and the skyrmion width ∆(d) reveals a broad maximum in the vicinity of d c [24]. The typical value of DMI strength, D, accessible in experiments with ultrathin films like X/Co (X = Pt, Ir, Pd) is 1-2 mJ/m 2 [8][9][10][11]. Therefore, all observed Neel skyrmions in these nanostructures are metastable [11], we could calculate l = 6.8 nm and d = 1.231. The skyrmion radius, measured by Lorentz transmission electron microscopy [11], is R exp s = 45 nm, or r exp s = R exp s /l = 6.6, whereas the calculations yielded the value r cal s = 4.1. The agreement of the skyrmion sizes measured by X-ray imaging [8,9] and by our calculations is reasonably good. The skyrmion size measured by Lorentz transmission microscopy was larger than the calculated one. This can be explained by the different mechanisms of image formation in these experiments. The image contrast is proportional to the magnetization out-of-plane component m · z for the X-ray imaging [8-10], whereas the contrast is proportional to the out-of-plane component of the magnetization curl (∇ × m) · z, for Lorentz microscopy imaging. The parameters K and D can be extracted with reasonable accuracy from independent experiments. The exchange stiffness A is poorly defined for ultrathin films with ferromagnetic layer thickness 0.5-1 nm. The skyrmion sizes measured in Refs. [8,9,11] are quite large, 40-65 nm. This means that the DMI parameter d is also large and close to its critical value d c , and the value of the skyrmion radius is very sensitive to the exact value of d (see Figure 1). According to its definition d = |D|/ √ AK, the DMI parameter depends on A. This leads to an uncertainty in the interpretation of the experimental data [8, 9,11]. This uncertainty may lead to the case d > d c for Ir/Co/Pt multilayer films [9]. Decreasing the out-of-plane magnetic field can essentially increase the skyrmion sizes (see Figure 2 in ref [9]), indicating that the single skyrmion state is unstable in zero out-of-plane field. We note that the dependences of the skyrmion radius R s on the DMI strength D for different values of A, simulated in Ref. [11], can be reduced to the universal curve R s (d) presented in Figure 1 if one changes the variable D to dimensionless variable d.
The case of magnetic dots considered in Refs. [14,17,22,24,25] is more complicated because the skyrmion configuration can be the dot ground state. The Neel skyrmions in circular dots can be metastable or stable even at D > D c = (4/π) √ AK. The calculated value of D for a transition between the metastable and stable Neel skyrmions in ultrathin circular dots is 1.5-2 times larger than one for infinite films for weak effective magnetic anisotropy 2K/µ 0 M 2 s << 1 [14]. It was also shown that the Bloch skyrmions can be the dot ground state for in-plane magnetic anisotropy K < 0 and D = 0 [25].
In the investigated case of out-of-plane effective magnetic anisotropy K > 0, the large values of the Dzyaloshinskii-Moriya interaction strength D > D c cause the nucleation of more complicated magnetization configurations (nπ-skyrmions [17], spin spirals, labyrinth domain, etc.), that is, the individual Neel or Bloch magnetic skyrmion state with the topological charge |N| ≈ 1 is no longer metastable.
Conclusions
We found that the isolated Bloch and Neel skyrmions in ultrathin magnetic films are metastable within the range of the DMI strength 0 ≤ d < d c , where d c = 4/π or D c = 4A/πl in absolute units, A is the material exchange stiffness, and l = A/(K u − µ 0 M 2 s /2) is the material magnetic length. The calculated skyrmion radius R s increases as d increases and diverges at d → d c − 0 , whereas the skyrmion width ∆ increases monotonically as d increases without any singularities at d → d c − 0 . The calculated skyrmion width is essentially smaller than the one calculated within the generalized domain wall model. The generalized DeBonte ansatz is a very good approximation to calculate the skyrmion radius, width, and energy. The linear skyrmion model cannot be applied for quantitative analysis of the skyrmion energy and size.
Author Contributions: A.A. performed the numerical simulations; K.G. performed analytical calculations; A.A. and K.G. analyzed the data; K.G. supervised the research and wrote the paper.
Funding: K.G. acknowledges support by IKERBASQUE (the Basque Foundation for Science). This research was funded by the Spanish MINECO grant FIS2016-78591-C3-3-R and the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant No. 644348.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,923.4 | 2018-11-01T00:00:00.000 | [
"Physics"
] |
RoadSegNet: a deep learning framework for autonomous urban road detection
Ground detection is an essential part of the perception system in self-driving cars. The ground can be imagined as a fairly smooth, drivable area that is even textured and easily distinguished from the surrounding area. It can have some common imperfections, like shadows and differing light intensities. In this paper, a comparative study of several deep neural network architectures has been reported that can deduce surface normal information on the classic KITTI road dataset in various challenging scenarios. Our goal is to simplify the task of how the recent methods perceive the ground-related information and propose a solution by testing it on three state-of-the-art deep learning models, which are “Resnet-50,” “Xception,” and “MobileNet-V2” to understand and exploit the capabilities of these models. The main significance of this comparative study has been to evaluate the performance of these networks for edge deployment. So, the tiny DNN model of MobileNet-V2 has been considered, which has approximately 80% fewer tunable parameters as compared to the others. The obtained results show that the proposed networks are able to achieve a segmentation accuracy of more than ~ 96% and that too in various challenging scenarios.
Introduction
Over the course of the last few decades, significant progress has been made in the field of autonomous vehicles, and DARPA has played a significant role in these developments [1]. The self-driving cars have been developed to use various onboard sensors like cameras, LiDARs, and GPS to collectively sense the dynamic environmental landscape and make the necessary decisions for safe navigation, and such systems are called advanced driver assistance systems (ADAS). Now, recent developments in the field of deep learning and multi-sensor fusion techniques have fostered the development of consumerready, safe, and efficient autonomous driving systems [2]. Technologies like multi-modal sensor fusion techniques and artificial intelligence are usually used collectively for the development of perception systems to sense the driving environment, predict the course of the traffic, plan the trajectory or lane assistance, and execute these decisions in the real world. It is desired that these intelligent perception systems be accurate, robust, and real-time. All of these will aid in the development of autonomous intelligent vehicle systems and thus reduce road accidents, decongest the roads, and make commuting much more efficient and economical too.
The present work explores the development of a deep neural network architecture for detecting the drivable road regions in a driving scene. The proposed RoadSegNet uses Google's DeepLavV3+ at its core for the semantic segmentation of the road surfaces. The RoadSegNet typically uses weights from three different pretrained networks, namely the two high-accuracy models of ResNet50 and XceptionNet and one tiny DNN of MobileNet-V2. To train the RoadSegNet, the Vision Benchmark Suite Data Set has been used in the present study.
The study presents a comparative study between the three state-of-the-art DNNs, of ResNet50, XceptionNet, and MobileNet-V2, and uses the DeepMind-V3+ encoderdecoder architecture for the segmentation. Apart from using these pretrained networks for weight initialization, another important aspect is their architecture. All these DNNs have a characteristic architecture and number of training parameters: the ResNet50 has 23 million trainable parameters, the XceptionNet has 22.8 million, and the MobileNet-V2 has only 4.2 million trainable parameters.
The main significance of this comparative study has been to evaluate the performance of these networks for edge deployment. So, the tiny DNN model of MobileNet-V2 has been considered, which has approximately 80% fewer tunable parameters as compared to the others, which makes it perfect for edge deployment. The execution time has also been compared in Table 5, and it can be observed that MobileNet-V2 offers a justifiable time for the segmentation and classification of the roads.
The performance of these trained models has been evaluated using the metrics of global accuracy, weighted IOU, and mean BF score. The trained models offer a global accuracy of between 96 and 97%. It has also been observed that the performance offered by MobileNet-V2, despite being a tiny deep neural network architecture, is comparable with that of XceptionNet and, in some cases, offers better performance than ResNet50.
Related work
The self-driving cars are autonomous decision-making systems, and this self-driving autonomy is divided into five SAE levels. The lower SAE levels offer basic driver assistance features like automatic braking, lane departure warnings, and adaptive cruise control, while the higher SAE levels are aimed at offering driverless navigation in all road conditions [12]. In the 1980s, Ernest Dickmanns developed the first autonomous car [13]. This was followed by various research efforts, like the development of Prometheus [14], VaMP [15], and CMU NAVLAB [16]. These advancements laid the groundwork for self-driving cars. In the early 2000s, DARPA's grand challenges [17] were one of the major turning points in the development of self-driving cars, where machine learning was used for the first time for navigation [18].
As these self-driving cars are autonomous decision-making systems and are being designed to assure road safety and efficient navigation, it is desired that the autonomous vehicle be able to not only perceive the current state of the driving environment, but also be able to foresee future behavior too. So, to estimate the current state and predict the future states of the driving environment, the self-driving vehicles use an amalgamation of onboard sensors like mono and stereo cameras, depth estimation sensors, LiDARs, EMUs, GPS units, and ultrasonic sensors, and based on the sensed data, the autonomous vehicle will make navigational decisions. Broadly, the data from these sensors is primarily used for the following four tasks: (a) perception and localization, (b) high-level path planning, (c) behavior negotiation, and (d) intelligent motion control. These four high-level tasks also need to be monitored for safety. Figure 1 shows the representation of the broad architecture of a perception, planning, and control workflow in autonomous vehicles. Perception and localization are two of the most important tasks to sense the dynamic traffic environment, and they leverage the use of various vehicle sensors. The various methodologies used in road detection are shown in Fig. 2. Some of the sensors used are discussed as follows: • Mono cameras can be used for obstacle detection and classification; they offer a costeffective solution and are good for two-dimensional mapping and lane detection, but they suffer from drawbacks like the fact that they are very sensitive to light and in poor lighting scenarios, like fog and rain; they offer a very poor performance. Also, it is very difficult to perceive the estimation of distance using such cameras. • Stereo-vision cameras provide the same functionality as mono cameras, but they also allow for three-dimensional mapping and depth estimation. However, these cameras are computationally expensive; additionally, velocity and distance estimation cannot be estimated, and, like mono cameras, these are light-sensitive and do not provide good results in challenging lighting. • LiDAR is also used for obstacle detection, robust 3D mapping of the driving scenario and environment using multi-layer LiDAR, direct estimation of the distances, efficacy in light weather conditions, etc., but the object classification is a challenge; some inaccuracies can occur due to reflective surfaces and typically severe weather conditions. • RADAR can be used for obstacle detection; it also provides velocity information; long-and short-range options are available; it detects well in poor weather conditions but performs poorly in terms of classification, static object detection, angular rotation, and interference due to multiple reflective surfaces. • Other sensors, like IMUs, GPS, GIS, are also used for estimating the various inertial measurements and real-time positioning of the vehicle on the road.
So, there is no one unique solution that offers good sensing and perception functionality, so multiple such technologies are used in conjunction with each other to offer accurate perception. The sensed data from various sensors are fused together to accurately perceive the driving environment. To localize the vehicle independently, methodologies like odometry, Kalman filters, particle filters, and simultaneous localization and mapping (SLAM) techniques are employed to estimate the state of the vehicle in a driving scenario. Figure 3 graphically illustrates the whole process of sensing, perception, localization, path planning, and vehicle motion control. Various road detection methodologies are given in Table 1.
After the successful completion of perception and localization, the next task is the trajectory or path planning to navigate the vehicle through the traffic. Path planning will influence the decision-making process and is the most important and challenging task. From the sensed data, the vehicle will try to understand the particular driving scenario, whether it is an intersection or a right turn, the states and behavior of the vehicles ahead, the various road signs, collision avoidance, etc. From this perceived information, the vehicle will learn and plan out all the possible trajectories, and using the machine learning models or state models, an inference will be made for navigating the vehicle through the road.
The last step in the process is the motion control of the vehicle. The vehicle motion control system influences the longitudinal and lateral movement of the vehicle, considering its dynamics. It engulfs the control of the steering, braking, and cruise control of the vehicle to assure that it sticks to the desired path on the road safely.
Literature review
One of the main tasks while sensing, perceiving, and localizing the current driving environment is to detect the free (drivable) road, which has been of interest for the last few decades. This visual perception is done in order to detect collision-free space in the driving environment that will aid the advanced driving assistance systems in autonomous decision-making. Road scene segmentation is one of the important computer vision techniques used in autonomous driving. A typical driving scenario may consist of buildings, vehicles, roads, pedestrians, etc., so it is essential to obtain or segment the drivable area from the captured road scene for collision-free navigation [19]. Road detection includes the estimation of the extent of the road, the various lanes and their intersections, splits, and termination points in the diverse driving scenarios. A drivable region is a connected road surface that is not occupied by any obstacles like other vehicles, and people. The objective of road segmentation is to impose geometrical constraints on the various objects that are present in the driving scene [19]. Road segmentation basically allows the generation of an occupancy map of the perceived driving environment and uses this information in the automated driving workflow to navigate safely. Thus, it becomes essential to accurately and efficiently segment the drivable road region from the driving environment. Traditionally, road segmentation is carried out using various computer vision algorithms that employ methodologies such as edge detection and histograms [20]. The key markers that aid humans in perceiving information about the road are color, texture, boundaries, and lane markings, and similar information can be used by driving assistance systems to safely navigate the driving environment. Vision-based perception has been prominent in the development of advanced driving assistance systems and is being coupled with various machine learning algorithms to develop the proof of concept for the SAE stage 2 to stage 3 level of autonomy in self-driving vehicles. But it is very difficult to do so, as road design and conditions vary throughout the globe and are not universally the same, so these computer vision algorithms will not offer universally uniform results.
Over the last few years, the development of full convolutional neural networks (CNNs) for semantic segmentation [21] boosted their adoption in autonomous driving, and the recent advancements in the development of massive or deep convolutional neural networks, like SegNet [22], will aid the driving assistance system in handling several diverse driving scenarios. Several researchers have used deep CNNs for the semantic segmentation of the driving scene. In [23], a DCNN has been reported for obstacle detection and road segmentation. The work proposes the use of a stereo-based approach to build a disparity map for obstacle detection in a driving environment. In [24], two networks, ENet and LaneNet, have been proposed to detect road features, and a weighted combination of the various features has been used for road detection. One CNN works on the detection of the road surface, and the other one is used to detect the lanes, and the output from both is merged to get an accurate and precise representation of the drivable road. A deep recurrent convolutional neural network (U-Net) for road detection and centerline extraction is discussed in [25]. The work involves the development of a novel RCNN unit incorporated into the U-Net framework for road extraction, followed by the multi-task learning scheme that handles both the tasks of road detection and centerline extraction simultaneously. In [26], ResNet-101 has been used for the detection of the road. In [27], a deep NN, road and road boundary network (RBNet), is developed for unified road and road boundary detection simultaneously and eliminates the possibility of a pixel being misclassified as a road or road boundary. In [28], a CNN with gated recurrent units has been proposed for the fast and accurate segmentation of the road and solves the problem of complex computation that is prominent in the conventionally used very deep encoder-decoder structure to fuse pixels for road segmentation. In [29], a DCNN with color lines has been proposed for the segmentation of unmarked roads. The work uses a score-based mechanism to create a conditional random field-based graphical model to segment the road from the background. In [30], CNN, along with distributed LSTM, has been used to segment the road. The network takes a multi-layer feature as input, solves the sequential regression problem, and generates an output of similar width as the input. The network comprises three sections: the first one is a CNN-based local feature encoder, followed by a LSTM-based feature processor, and finally the CNNbased output decoder. Also, recently, with the development of various sensor fusion technologies, deep learning-based multi-modal systems are being developed for autonomous vehicles [1,31,32]. The deep multi-modal detection and classification methodologies sense and fuse data from multiple sensing mechanisms, like mono and stereo vision, LiDAR, RADAR, GPS, and IMU to generate complex features. In [33], a 3D object detection system has been developed by fusing the data sensed from the RGB camera and LiDAR point cloud. By using the fused information, the work predicts 3D bounding boxes, and the network consists of two subnetworks, one meant for 3D object detection and another for multiview feature fusion. Similar work has been reported in [34][35][36][37][38] where the data from the cameras has been fused with LiDAR point clouds for 3D object detection. Some research has also been focused on using multi-spectral camera images [39,40], where the RGB images along with the far-, middle-, and near-infrared images have been used to perceive the multilateral information about the driving scene and for the perception of the depth.
Problem definition
Pavlidis [41] formally defined segmentation as a process of pixel classification in which the input picture is segmented into subsets by assigning the individual pixels to classes. For example, while segmenting a picture by thresholding its gray level, we are actually classifying the pixels into dark and light classes in an attempt to differentiate light objects from dark backgrounds or vice versa. In the literature, it has been reported that deep learning models are enriched with stacked layers (depth), and using these models, one can get high-quality results and that too with great accuracy. These models can utilize the maximum amount of unstructured data.
Semantic segmentation has a promising potential in autonomous driving for developing visual perception systems. The images captured from the various cameras present can be used to develop various driving assistance systems, like road and lane detection systems. Figure 4 shows an example of the road segmentation process. Figure 4a shows the image of the driving environment captured by a camera mounted on the car, and Fig. 4b shows the segmented image containing three classes: (a) the environment, shown
Need for ground detection
In traditional automotive systems, there has been a tradeoff between distance sensitivity and object sensitivity as shown in Fig. 5. When the object is close, its sensitivity is high, allowing for better classification; as the distance increases, the object becomes farther away, potentially leading to poor results. To address good distance and object sensitivities, the current approaches would require too many computational resources. By knowing what and where the ground region in an image is, we can detect both objects and their distances. Also, for autonomous vehicles, it is essential to know about the drivable region in a driving scenario or environment. The proposed system in the present work aims at detecting and segmenting the road area using the KITTI road dataset [42], which will prove valuable in tasks like autonomous driving and navigation systems. For this purpose, "ground" has been defined as a relatively smooth, drivable, and easily distinguishable from the surrounding surface. It may consist of common irregularities or imperfections or differing light conditions. The paper has been organized into the following sections: The "Results and discussion" section sheds light on the state-of-the-art research in the field of the development of advanced driver assistance systems for self-driving cars and the various techniques that are being used for perception and localization tasks. The "Evaluation metrics" section deals with establishing the background for various deep learning models used in the present study and how these can be used for road segmentation purposes. The "Training performance" section deals with the various DNN architectures, methods, and datasets used in the present work. The "Segmentation results" section deals with the segmentation results and the evaluation of the various performance indices, followed by discussions and the scope for future work in the "Discussion" section.
KITTI Vision Benchmark Suite Data Set
The KITTI Vision Benchmark Suite [42] is a dataset designed for object and road/lane detection. The road/lane dataset consists of 289 training and 290 testing images. Each image is 372 × 1242 pixels in size. All the images were acquired on five different days. The dataset is further divided into three categories of road scenes, as can be seen in just the lane where the car is moving. In the current study, the set corresponding to all the road surfaces has been used. These labels are RGB images that color code the road as magenta, non-road areas as red, and left road surfaces as black. The dataset has been pre-processed and augmented according to the input layers of the network before being fed. For ResNet50 and MobileNet-V2, the data set has been resized to 224 × 224 pixels, and for Xception, it has been resized to 299 × 299 pixels.
Methods
The present work is based upon Google's DeepLabV3+ semantic segmentation model, as shown in Fig. 7, and the architecture and weights have been initialized from three different pretrained networks, viz., two high accuracy models of ResNet50 and Xception-Net and one tiny DNN of MobileNet-V2, typically for the edge deployment. All of these networks and models are discussed as follows:
RoadSegNet architecture
The RoadSegNet is built around the cutting-edge DeepLabV3+. DeepLab [43] is an open-source semantic segmentation model designed by Google and works by adding a simple decoder module that helps in segmenting objects along boundaries and also refines the segmentation results. More rapid results are achieved by using depth-wise separable convolution for both Atrous Spatial Pyramid Pooling and the Decoder Module [43]. The weights were initialized using the transfer learning method. The three stateof-the-art DNNs have been utilized. The work considers the use of two high-accuracy models, ResNet50 and XceptionNet, and one tiny DNN, MobileNet-V2. DeepLabV3+ uses an aligned Xception network as its key feature extractor, along with the following modifications: a) The max pool layers are replaced by depth-wise separable convolution and striding. b) Additional batch normalization and ReLU activation are added after each 3 × 3 depth-wise convolution. c) The depth of the model is increased without changing the entry flow network structure.
The encoder works on an output stride, i.e., the ratio of the original image size to the size of the final encoded features. Instead of using bilinear up-sampling with a factor of 16, the encoded features are first unsampled with a factor of 4 and concatenated with corresponding low-level features from the encoder module having the same spatial dimensions. To reduce the number of channels, 1 × 1 convolution is applied before concatenating on the low-level features. After concatenation, a few 3 × 3 convolutions are applied, and the features are unsampled by a factor of 4. This gives the output the same size as the image. The semantics of the proposed RoadSegNet architecture based on DeepLabv3+ are shown in Fig. 8 as below.
Evaluation metrics
To evaluate the efficacy of the obtained segmentation results, the metrics (a) global accuracy, (b) mean accuracy, (c) mean IoU, (d) weighted IoU, and (e) mean BF score have been used. For describing these evaluation metrics, the following terms are used: • True negative (TN): pixels that belong to the background and are correctly classified as such
Accuracy
It can be calculated for each class separately as well as globally for all classes. The accuracy gives the proportion of correctly classified pixels in each class and is given in Eq. 1
Global accuracy
The global accuracy is the ratio of pixels correctly classified to the total number of pixels and is given in Eq. 2
Mean accuracy
The mean accuracy is the ratio of the sum of the accuracy of each class to the number of classes.
Intersection over Union (IoU)
It calculates the incorrect classification of the pixels and is given in Eq. 3. where
Weighted IoU
The weighted IoU is used when there is a disproportionate relationship between the class sizes in the images, minimizing the penalty of the wrong classification in smaller classes. It is given in the equation as follows: where (1) Accuracy =
BF score
It calculates the alignment between predicted borders to the gold standard one. It is given by the harmonic mean of recall and precision as shown in Eq. 4 as: where
Training performance
In the proposed work, the KITTI Road/Lane Detection Evaluation Dataset 2013 [42] has been considered. To accommodate the dataset with the proposed architecture of RoadSeg-Net, the dataset has been preprocessed to meet the requirements of each of the individual deep neural networks of ResNet50, Xception, and MobileNet-V2. The various class labels have been redefined as the environment, the left road, and the right road, and accordingly, the LabelIDs and ColorMaps for the KITTI Road dataset [42] have been modified. For training the various networks, the algorithm-specific learning option of stochastic gradient descent with momentum (sgdm) has been used for all three networks. The initial learning rate has been considered as 0.001, and the maximum number of epochs has been taken as 100 for all the networks. The mini-batch sizes are set according to the GPU specifications, and the rest of the parameters are kept the same. All the models have been trained in MATLAB 2020b environment running on a Windows 10 PC, with Ryzen 9, 12 Core CPU with 16 GB of RAM, and a Nvidia 2060 super 8 GB GPU. Figures 9, 10, and 11 show the plot for the training loss, training accuracy, and base learning rate for all the networks considered for the RoadSegNet, namely, ResNet50, XceptionNet, and MobileNet-V2, respectively. From the plots, it can be observed that the training loss function minimizes as all of these networks achieve good training accuracy of approximately ~ 96 to 97%.
Segmentation results
After training the RoadSegNet, the network is fed with various driving scene images from the KITTI Road Eval Dataset. The segmented images for ResNet50, Xception-Net, and MobileNet-V2 are shown in Table 2. The first six images in each table show the best obtained segmentation results, and the last three images (S. nos. 7 to 9) show the segmentation results for very harsh driving scenarios in heavily shadowed regions where the segmentation becomes quite challenging. The tables also show the plot for intersections over the union (IOU) between the segmented image and the ground truth image. The IOU plots in each table show that the RoadSegNet can detect the drivable road in each driving scenario with high precision, even in very shadowed areas. All the evaluation parameters have been tabulated in Tables 3 and 4. Table 3 Background gives the comparison of the various performance metrics like global and mean accuracy, mean and weighted IOU and mean BF score for the entire training, and testing and validation datasets for each designed network. Table 4 gives the information regarding the accuracy, IOU, and mean BF score for each class, i.e., with what precession a particular class has been detected for the entire training, testing, and validation datasets for each designed network. Figures 12, 13, and 14 show the radar plot for Tables 2, 3, and 4, it can be observed that the developed networks offer very good accuracy, the global accuracy ranges between ~ 96 and 97%, the weighted IOU also spans between ~ 92 and 97%, and the mean BF score too varies between ~ 0.75 and 0.83. It can also be observed from the obtained results that the MobileNet-V2, despite being a tiny deep neural network architecture, offers almost comparable performance with the XceptionNet and, in some cases, offers better performance than the ResNet50.
Discussion
Any autonomous driving system consists of four stages, viz., perception, localization, path planning, and control. The present work is focused on perception tasks. The scope of the work presented in this paper is to build a deep learning-based ground detection system. The results as obtained in the "Segmentation results" section validate the robustness of the system by detecting a significant part of the road, even in the improperly illuminated regions. The left road regions are not detected well due to a smaller number of images being labeled for the region, as can be seen in Tables 2, 3, and 4 (S. no. [7][8][9]. This can be improved by using a dataset with more of these images. The developed framework performs best on bright images, as can be seen in Tables 2, 3, and 4. The work explores the application of three different pretrained networks of ResNet50 and XceptionNet (high accuracy models) and MobileNet-V2 (tiny DNN) typically for edge deployment. It can be observed that the accuracy of MobileNet-V2 is on par with the accuracy of the high-accuracy models of ResNet50 and XceptionNet. With added capabilities like lane detection, depth estimation, and intersection detection, the proposed model can be used for efficient road detection tasks. Although the model performs well in daylight conditions, the capability of the model in nighttime scenarios has not been tested, which still poses a challenge for autonomous vehicles.
In the paper, a comparison has been made between the various state-of-the-art DNNs of ResNet50, XceptionNet, and MobileNet-V2. Table 2 shows qualitatively that the IOU for the trained models provides excellent performance for brightly lit roads as well as in very complex shady conditions. This observation has been established quantitatively in Tables 3 and 4. In Table 3, the metrics of global accuracy for the segmentation have been analyzed, and it is observed that the models offer an accuracy above 97% for the training dataset and above 96% for the test database. Also, the other metrics of mean accuracy, mean IOU, weighted IOU, and mean BF scores have been evaluated for all three DNN models, and these have been established for both the training and the testing datasets. Similarly, in Table 4, the comparison of the class-wise accuracy of the 3 DNNs has been made such that they are able to accurately segment and classify the various classes in the dataset, viz., left road, right road, and environment. The metrics of accuracy, IOU, and mean BF score have been used to evaluate the efficacy of the three DNNs, and the evaluation has been done on the training, testing, and validation datasets, and it can be observed from Table 4 that good results have been obtained. The drivable section in the dataset is the "right road, " and it can be observed that an accuracy of ~ 99% has been obtained for MobileNet-V3, and other two networks also offer an accuracy of about 91% and 97%. Similarly, for the environment, an accuracy of 97% is obtained for MobileNet-V3, and other networks too offer an accuracy of above 97%. Similarly, the performance has been evaluated for the testing as well as the validation dataset. Table 5 presents a comparison of the current work with the work already reported in the literature, and it can be observed that the current work offers one of the highest accuracies and that too in a minimum amount of runtime.
Conclusions
In this study, a deep learning-based autonomous road detection system has been proposed. The proposed framework is built on the DeepLab-V3+ architecture, which is a state-of-the-art semantic segmentation network developed by Google. The weights of the network are initialized by three image classification networks, namely, ResNet-50, MobileNet-V2, and Xception. The results are evaluated on the benchmarked KITTI road dataset. The model is tested for adverse light conditions and general ground complexities, while also achieving significant results on the evaluation metrics. The proposed model also achieves good results on a small and yet powerful network, MobileNet-V2, that can be used in systems that require low power and can be used for edge deployment. | 6,846 | 2022-12-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Adaptive grid‐driven probability hypothesis density filter for multi‐target tracking
National Natural Science Foundation of China, Grant/Award Number: 61305017; Natural Science Foundation of Jiangsu Province, Grant/Award Numbers: BK20181340, BK20130154 Abstract The probability hypothesis density (PHD) filter and its cardinalised version PHD (CPHD) have been demonstratedasa class of promising algorithms for multi‐target tracking (MTT) with unknown,time‐varying number of targets. However, these methods can only be used in MTT systems with some prior information of multipletargets, such asdynamic model, newborn target distribution etc.;otherwise, the tracking performance will decline greatly. To solve this problem,an adaptive Grid‐driven technique is proposed based on the framework of the PHD/CPHD filter to recursively estimate the target states without knowing the dynamic model and the newborn target distribution. The grid size can be adaptively adjusted according to the grid resolution, and the dynamic tendencies of the grids can respond to the unknown dynamic models of each targets, including arbitrary manoeuvring models. The newborn targets outside the grid area can be identified by analysing the measurements, and some new grids are generated around them. The experimental results show that the proposed algorithm has a better performance than the traditional particle filter‐based PHD method in terms of average optimal sub‐pattern assignment distance and average target number estimation for tracking multiple targets with unknown dynamic parameters and unknown newborn target distribution.
| INTRODUCTION
With the increasingly complex war environments, the requirements for multi-target tracking (MTT) techniques have increased and have gained wide attention, especially, for multiple targets with unknown, time-varying numbers. Generally, it is unknowable where the targets appear from and what dynamic models are used, which makes the MTT techniques a hot topic and an extremely difficult problem in the MTT field.
In recent years, different from the conventional data association techniques, the random finite set (RFS) theory [1][2][3] has been proposed as an elegant formulation for MTT and has generated substantial interest due to the development of the probability hypothesis density (PHD) filter [1] and the cardinalised PHD (CPHD) filter [2]. The PHD and CPHD filters can estimate the target states by recursively computing the first-order moment of the multi-target posterior probability distribution, avoiding the combinatorial problem that arises from complex data association. Furthermore, compared with the standard PHD filter, the CPHD filter can dramatically improve the accuracy of individual state estimates and cardinalised estimates due to the extra estimation of the cardinalised distribution. Now, the existing closed-form solutions of PHD and CPHD mainly include the particle filter PHD/ CPHD (PF-PHD/CPHD) [4,5], Gaussian mixture PHD (GM-PHD/CPHD) filter [6] and their modified versions [7][8][9][10]. However, these algorithms can exhibit a good MTT performance only when the model parameters of the tracking systems are knowable as a priori knowledge, such as the newborn intensity of the targets and the dynamic models. However, these parameters are usually unknowable in the real tracking scenarios, resulting in a serious decline in tracking performance.
For the unknown dynamic models in the MTT system, especially for the manoeuvring target tracking, the jump Markov system has proved to be efficient as it switches among a set of candidate models in a Markovian fashion [11,12]. The closed-form solution for the non-linear jump Markov multitarget model is proposed by combining the linear fractional transformation and the unscented transform in [13,14]. In [15], the best-fitting Gaussian approximation approach is employed in the GM-PHD filter with jump Markov models. However, the Gaussian distribution of the PHD is assumed in these algorithms, which limits the scopes of the application. The multiple-model particle PHD/CPHD (MMP-PHD/ CPHD) filter and the MMP-Multi-Bernoulli filter are proposed by implementing the sequential Monte Carlo method and their improved versions are presented in [16][17][18]. For the application of these MM-based filters, the difficulty lies in designing the model sets because the tracking accuracy depends on the matching degree of the prior designed model sets with the real target dynamic models. Moreover, if the process noises are very loud, the targets can also be considered arbitrary manoeuvring targets, and it is difficult for the nonmanoeuvring models to track these targets. The variational Bayesian approximation method [19][20][21] is used to recursively estimate the joint state of multiple targets with process noises [22,23], but in the tracking process, prior information about the newborn target is also needed, and when the target is manoeuvring, the tracking performance will be seriously affected.
In order to solve the above-mentioned problems, an adaptive grid-driven PHD/CPHD (GD-PHD/CPHD) filter algorithm is proposed under the framework of PHD and CPHD. The grid-based filter was first proposed by Bucy and Senne [24], followed by Kramer and Sorenson who further elaborated it [25,26], but it generally uses the fixed grid points for filtering, which affects the filtering efficiency. The proposed algorithm can adaptively adjust the grid position and size according to the grid resolution and the estimated states. The main contributions of the proposed algorithm are as follow: (1) The newborn targets can be identified by analysing the measurements and the grid distribution; (2) the dynamic tendencies of the grids can responds to the unknown dynamic models of each targets; (3) the weight update strategy for the grids is proposed when we compress and expand the grid regions according to the grid resolution.
The remainder of the article is organised as follows: Section 2 summarises the RFS model, the PHD and CPHD filters. Section 3 proposes the GD-PHD/CPHD algorithm and derives the grid particle solution. Simulation results are presented in Section 4. Finally, the conclusions are given in Section 5.
| Random finite set model
For the MTT by using the PHD/CPHD-based filters, the multiple target state set and the measurement set are constructed as the RFSs X k ¼ fx k;1 ; x k;2 ; …; x k;N k g and Z k ¼ fz k;1 ; z k;2 ; …; z k;M k g. N k and M k denote the number of targets and measurements, respectively. Suppose X k−1 is the multiple target state set at time k − 1, then X k and Z k can be expressed as where S kjk−1 ðx Þ is the RFS of targets surviving from time k − 1 to k, B kjk−1 ðx Þ is the RFS of targets spawned from X k−1 and Γ k is the RFS of targets that appear spontaneously at time k. Θ k ðx Þ and K k are the RFSs of measurements originating from the targets in X k and the clutters, respectively. The optimal Bayesian recursions for propagating the multitarget posterior probability density function (PDF) are expressed as shown in [6].
where μ s denotes the approximate state space Lebesgue measure, and p kjk−1 ðX k jY 1:k−1 Þ and p kjk ðX k jY 1:k Þ are the predicted PDF and the posterior PDF, respectively. f kjk−1 ð⋅Þ is the state transition PDF and g k ð⋅Þ is the measurement likelihood function. Notice: Generally, the dynamic model f kjk−1 ð⋅Þ and the process noise in the real scenarios are unknowable, that is, it is difficult to obtain them as a priori knowledge. Therefore, it is difficult to choose the right dynamic models for tracking the targets with arbitrary motions, resulting in the inaccurate estimations for the posterior PDF of the multitarget states. In the proposed algorithm, we will solve this problem by using the GD technique with grid extension. It is noteworthy that the grid extension can cover the targets with arbitrary motions.
| PHD filter
The PHD filter mainly includes two parts, that is, the prediction and the update. Assume D k−1jk−1 denotes the PHD of multiple targets at the time k − 1. The PHD recursive process can be expressed as follows [6]: where ϒ k ðx Þ denotes the intensity of the newborn target RFS at time k and the transition kernel ϕðx jx k−1 Þ is expressed as where β kjk−1 is the intensity of the spawned target RFS at time k − 1, P S;k ðx k−1 Þ is the survival probability of target x k−1 , and Pðx jx k−1 Þ is the state transition PDF of each target.
| PHD update
where λ k is the Poisson parameter used to represent the expected number of false alarms and c k ðzÞ is the probability distribution of clutters in the observation space. 〈⋅; ⋅〉 and ψ k;z ð⋅Þ are expressed as where g k ðzjx Þ is the measurement likelihood function and P D;k ðx Þ is the detection probability.
Notice:
In the traditional PHD-based algorithms, the newborn intensity is usually assumed as a priori knowledge. However, it is unknowable in the real scenarios, that is, ϒ k ðx Þ is unknowable in Equation (5). Due to this the newborn targets cannot be estimated, decreasing the tracking accuracy. We will adaptively identify the newborn targets from the measurements in the proposed grid driven PHD (GD-PHD) algorithm.
| CPHD filter principle
The CPHD filter is a generalisation of the PHD recursion, which jointly propagates the intensity function and the cardinalised distribution. Compared with the PHD filter, the CPHD filter can dramatically improve the tracking accuracy and the number estimations. Assume D kjk−1 ðx Þ and p kjk−1 ðnÞ denote the multi-target intensity function and cardinalised distribution associated with predicted multitarget states at time k − 1, and D k ðx Þ and p k ðnÞ denote the multi-target posterior intensity function and the cardinalised distribution at time k, respectively. The prediction and updation steps of the CPHD method are briefly summarised as follows [5]:
| CPHD prediction
The predicted intensity D kjk−1 ðx Þ is the same as Equation (5), and the prediction of the cardinalised distribution is expressed as where p Γ;k denotes the cardinalised distribution of the newborn targets RFS, and 〈⋅; ⋅〉 denote the inner product operation. C l j ¼ l!=j!ðl − jÞ! denotes the binomial coefficient.
| CPHD update
The updated intensity D k ðx Þ and the updated cardinalised distribution are expressed as where P n j ¼ n!=ðn − jÞ! denotes the permutation coefficient, jZ j denotes the cardinality of the measurement set, p K;k denotes the cardinalised distribution of the clutter random set, and Z k nfzg denotes the remaining measurements of Z k by deleting the measurements z.
where e j is a j-order elementary symmetric function.
| GRID-DRIVEN PHD/CPHD FILTERING ALGORITHM
For the unknown dynamic model (e.g. movement parameters and process noise) and the unknown distributions of the newborn targets in the real MTT scenarios, the GD technique is proposed under the framework of the PHD and CPHD filters. The block diagram of the proposed algorithm is shown in Figure 1. First, the tracking area is uniformly divided into some small grids, which can be considered as grids with equal weights that are distributed on the tracking area at regular intervals. Then the arriving measurements are used to update the grid weights by computing the likelihoods between the grids and the measurements, and subsequently, to compress the grids and extract the target states according to the weights. The grid points are similar to the sample particles of the particle filter. The closer the grid point is to the target, the greater its weight and vice versa. Thus, some of the grids with small weights can be deleted to compress the grids. Finally, we expand the grid area according to the maximum speed of the target and re-divide the extension area as some predicted grids, according to the obtained grid resolutions, and their weights can be associatively calculated with the previous grids by using the kernel-based weight interpolation technique. The newborn targets and the clutters can be identified from the measurements by analysing the association information between the measurements and the survival grid area. If the measurements are not associated with the previous grids, they are considered as originating from the newborn targets or clutters.
| Grid initialisation
Assume that the observation area is a rectangular area. At the initial time k ¼ 0, the observation area is uniformly divided into N 0 grids with the vertical resolution d 0 α and the horizontal resolution d 0 β : The uppermost edge and the lowest edge coordinates are α 0 max and α 0 min . The far right and the far left edge coordinates are β 0 max and β 0 min . The initial grid set is expressed coordinate. Assume that at the initial time, there are M 0 targets with the same number as that of the initial measurement set i¼1 , jZ 1 j ¼ M 0 denotes the cardinality of Z 1 , and then we set the initial grid weights as
| Initial target state extraction
At time k ¼ 1, the initial measurement set Z k is used to update the weight of each grid, for i ¼ 1; 2; …; N 0 .
where K k ðzÞ ¼ λ k c k ðzÞ is the intensity of the clutter RFS, λ k is the clutter number obeying the Poisson distribution, and c k ðzÞ is the PDF of the clutters.
Then the number of targets can be estimated bŷ where roundð⋅Þ denotes the rounding-off operator. According to the estimated numberM k of the targets and their corresponding weights, the grids are clustered intoM k clusters number of the grids belonging to the i th cluster and g i;j k denotes the j th grid in the i th cluster. Figure 2 shows the clusters in which the red grids represent the grid clusters and the green ellipses in the clusters denote the grid area with high probability of the target state space.
The target states can be extracted by the weighted sum of the grids for each cluster expressed as
| Grid shrinkage
Grid shrinkage is the pruning of the grids by deleting the grids with small weights. We arrange the weights w i;j k in descending order for each grid in the culsters, and the first L i k grids with larger weights are retained for subsequent filtering by deleting the grids with small weights. It is noted that the sum of weights for the deleted grids should be smaller than a threshold (e.g. 5% of all the weights sum in a cluster). The grids retained can Assume that the resolution d k α and d k β of the vertical and horizontal directions of the grids are equal and can be updated according to the measurement standard deviation σ k .
where η is a scale coefficient.
| Grid expansion and weight redistribution
The purpose of the grid expansion is to extend the grid area for each target to cover its measurement for the next time, which can keep the target identified and tracked for the next time. Suppose that the maximum speed of the targets is v k and the minimum number of grids is set as E, and update the uppermost edge coordinate α k;i max and the lowest edge coordinate α k;i min in the vertical direction for each cluster, i ¼ 1; 2; …;M k , that is, Use the same method to update the far right edge coordinates β k;i max and the far left edge β k;i min for each cluster; i ¼ 1; 2; …;M k , that is, The edge coordinates construct a box area for each cluster, and then we uniformly divide the box area intoÑ , where the weightsw i;j k of the new grids need to be redistributed according to the original weights of the L i k grids due to the number difference of the grids before and after grid expansion. We propose to redistribute the weightsw i;j k by weight interpolation calculation based on the kernel method, that is, where λ denotes the scale factor, and the weights are flattened and normalised according to Equations (34) and (35), where a < 1. Figure 3 gives an example of shrunk grids and expanded grids of two targets (clusters). In Figure 3a, the red grids including the blue grid area are the initial grids of the targets, and the blue grids represent the shrunk grids of the red grid area. In Figure 3b, the blue grids including the red grids are the expanded grids of the red grids.
| Newborn target identification
Assume that the estimated grid set G k at time k has been obtained. Then the current measurements corresponding to the survival (estimated) targets can be identified by matching the estimated grids, that is, if the measurements scatter in the grid area, they are considered as the measurements fz i k g jZ 1 k j i¼1 of the survival targets; otherwise, they are considered as the measurements fz j k g jZ 2 k j j¼1 of the newborn targets and/or clutters. Therefore, the measurement set is divided into two parts, that is, to generate the newborn grids. First to extract the box area with the centre of each measurement of Z 2 k , the width and the height of each box area are set as 2 times the maximum speed v, and then the box area is uniformly divided into some small grids by the same vertical direction resolution d k α and the horizontal direction resolution d k β as that of the previous grids. Finally, we can obtain the new grids expressed as 3.1.6 | Update grid weights and extract target states At time k þ 1, the measurement set Z kþ1 is used to update the weight of each grid in G k and G new;kþ1 according to Equation (37), that is, Then the number of the targets can be estimated bŷ According to the estimated numberM kþ1 of targets and their corresponding weights, the grids are clustered intoM kþ1 clusters . Notice that the k-means method is implemented to get the clusters. The target states can be extracted by the weighted sum of the grids for each cluster, expressed as It is noted that if there is no measurement at time k þ 1 associated with the newborn target at time k, after updating, the newborn target will be deleted as a clutter. Figure 4 shows an example of the process of newborn target recognition. In Figure 4a, there are 2 survival clusters (the red grids) at time k in the tracking area, and below the survival targets there are 2 purple grids generated by the unknown newborn targets or clutters. While at time k þ 1 in Figure 4b, when the new measurements arrive, the grid weights can be obtained by calculating the likelihood between the grids and the measurements. Then the grids with small weights that originated from the clutter are removed, and the grids originating from the newborn are kept as the survival target grids.
The steps of the GD-PHD algorithm are summarised in Table 1. Steps 3, 4 and 5 can be considered as the prediction stage of the traditional PHD filtering, and Step 6 belongs to the update stage.
Step 7 is used to judge whether the tracking is terminated or not.
| Grid-driven CPHD algorithm
The GD-CPHD algorithm is similar to the GD-PHD algorithm in Section 3.1. The advantages of the CPHD algorithm are mainly reflected in the cardinalised distribution estimation of the targets, which improves the tracking accuracy of the algorithm. The steps of the GD-CPHD algorithm are briefly described as follows: The grid initialisation is the same as that in Section 3.1, and then when the initial measurement set is arrived at time k ¼ 1, the grid weights can be updated by where the initial intensity D 0 can be approximated by the initial grids as D 0 ðgÞ ¼ . p 0 is the initial cardinalised distribution which is also assumed to equal the cardinality of (1) Grid initialisation, (2) Extract initial target state set X k according to Equation (21). the initial measurement set, that is, the estimated number of target isM k ¼ M 0 , and the grids are clustered intoM k clusters according to the weights and expressed as G k ¼ fG i k ¼ fg where N i k denotes the number of the grids belonging to the i th cluster and g i;j k denotes the j th grid in the i th cluster. The target states can be extracted according to Equation (39) by the weighted sum of the grids for each cluster.
Subsequently, the steps implemented for grid shrinkage, grid expansion, weight redistribution and newborn target identification are the same as those implemented for GD-PHD. After these steps, the grids when the latest measurements Z kþ1 arrive, that is, The same method can be used to update the weights of the newborn grids, where the intensity function D k can be expressed as The number estimation of the targets can be calculated bŷ According to the estimated numberM kþ1 of targets and their corresponding weights, the grids are clustered intoM kþ1 . The target states can be extracted by the same method as that of GD-PHD. If the tracking does not end, we can jump to the grid shrinkage step; otherwise, we terminate the tracking. the newborn intensity for the PF-PHD and MM-PHD algorithms is Γ ðiÞ k ðxÞ ¼ where i ¼ 1; 2; 3; 4, ðm ðiÞ Γ ; p ðiÞ Γ Þ denotes the newborn component parameter of the i th target. The experiment was performed on an ASUS PC with Core i5-7300 processor and 16 GB memory using the MATLAB 2018 simulation software. The optimal sub-pattern assignment (OSPA) metric [27] is employed to evaluate the state estimate precision of each algorithm with cut-off parameter c ¼ 100 and order parameter p ¼ 2. Moreover, the average number estimates and their root mean square error (RMSE) are used for performance evaluation of the number estimates. The simulation results are obtained from Monte Carlo experiments of 100 ensemble runs.
| Cross multi-target tracking scenario
In this experiment, assume that there are three targets making a crossing motion. Targets Figure 5. The clutter is modelled as a Poisson distribution with the average clutter rate r ¼ 3 over the observation space. The variance of the process noise is assumed as σ 2 v ¼ 0:8 m 2 s −3 . The survival probability and detection probability of the target are set as P S;k ¼ 0:99 and P D;k ¼ 0:98, respectively. It is assumed that the number of particles for the PF-PHD filter is 1500, and the experimental results are shown in Figures 6, 7 and 8. Figure 6 shows the comparison of the OSPA distances of the proposed GD-PHD and GD-CPHD filters and the traditional PF-PHD filter. It is clear that the tracking accuracy of the proposed algorithms, GD-PHD and GD-CPHD, is significantly better than the traditional PF-PHD filter, and the proposed algorithm, relative to the PF-PHD filter, can adaptively identify newborn targets. It does not need a priori information of newborn targets and has a better application ability for unknown scenes. In addition, since the proposed algorithm GD-CPHD has a better cardinalised distribution estimation ability, it is obviously superior to the GD-PHD filtering algorithm in estimation accuracy. It is noted that when the third target disappears at the 50th second, the OSPA distance of the GD-CPHD algorithm is higher than that of the other two algorithms. The reason is that the missed detection problem [17] has been cured in the CPHD-based method, which is beneficial when missed detection really occurs but is harmful when targets really disappear.
F I G U R E 5 Real target trajectories
F I G U R E 6 Average optimal sub-pattern assignment (OSPA) distance for each time. CPHD, cardinalised PHD; GD-PHD, grid driven PHD; PF-PHD, particle filter PHD; PHD, probability hypothesis density F I G U R E 7 Average number estimates of targets for each time. CPHD, cardinalised PHD; GD-PHD, grid driven PHD; PF-PHD, particle filter PHD; PHD, probability hypothesis density YANG ET AL. Figure 7 shows the cardinalised estimation of the proposed GD-PHD and GD-CPHD filters and the traditional PF-PHD filter, and Figure 8 shows the average number estimates of targets for the different algorithms. As can be seen, the proposed algorithms are superior to the PF-PHD algorithm in cardinalised estimation.
In order to further verify the stability of the proposed algorithm, Table 2 gives a comparison of the tracking accuracy of different algorithms with different process noises. It is clear that the proposed algorithms, GD-PHD and GD-CPHD, are not sensitive to the process noise, while PF-PHD is sensitive. As the process noise increases, the average OSPA distance of PF-PHD also increases significantly.
| Manoeuvring a multi-target tracking scenario
Suppose there are four manoeuvring targets in the experimental scenario. The manoeuvring parameters of the turn rate are set as ω ¼ �0:1 rad=s, and the initial positions of the targets 1 and 2 are ð10; 30Þ and ð−55; −30Þm: Target 3 starts moving at the 20 th second and disappears at the 50 th second. Target 4 starts moving at the 10 th second and disappears at the 60 th second. Their initial states and covariance can be described as m Figure 9. It is noted that the proposed algorithm can track the targets with arbitrary motion trajectories, since the proposed algorithm is not disturbed by the motion model and unknown newborn targets. The clutter is modelled as a Poisson distribution with an average rate r ¼ 3 in the observation space. The variance of the process noise is assumed as σ 2 v ¼ 0:01 m 2 s −3 . The probabilities of survival and detection of the targets are P S;k ¼ 0:99 and P D;k ¼ 0:98, respectively. 1500 particles are implemented in the PF-PHD algorithm. For the MM-PHD algorithm, assume the model set includes three different models, that is, CV and CT with turning rate þω and −ω, respectively. For testing the different tracking performance of the MM-PHD, we set the turning rate ω as �0:01, �0:05, �0:1, �0:2, and �0:5 for different model combinations. The experimental results are shown in Figures 10, 11 and 12, and Tables 3 and 4. Figure 10 shows the OSPA distances of GD-PHD, GD-CPHD, MM-PHD and PF-PHD. It is clear that the proposed GD-PHD and GD-CPHD algorithms have a higher tracking accuracy than PF-PHD. For MM-PHD, when the model manoeuvre parameters are accurate (e.g. ω = �0:1), the tracking accuracy of the model is higher than that of the others; moreover, the tracking accuracy is also affected by the mismatched models in the model set. If the models cannot be matched with the real manoeuvring models, the tracking accuracy will decrease. Therefore, the proposed GD-PHD and GD-CPHD algorithms have a better tracking performance than the MM-PHD algorithm without having prior information about the dynamic model. In addition, the proposed algorithms also do not require a priori information of the newborn targets due to their good adaptive capability for arbitrary MTT. Compared to the GD-PHD, the GD-CPHD Abbreviations: CPHD, cardinalised PHD; GD-PHD, grid driven PHD; OSPA, optimal sub-pattern assignment; PF-PHD, particle filter PHD; PHD, probability hypothesis density.
F I G U R E 9
Manoeuvring target trajectory 10has a higher accuracy due to its additional estimation of the cardinalised distribution. Figure 11 shows the cardinalised estimation of the proposed GD-PHD and GD-CPHD algorithms and the MM-PHD and PF-PHD filters. Figure 12 shows the RMSE of the number estimation. As can be seen, the proposed algorithm is also significantly better at cardinalised estimation than the MM-PHD and PF-PHD algorithms. For the PF-PHD and MM-PHD algorithms, some targets are missed estimations due to the mismatched models when the targets make a manoeuvring motion; therefore, the RMSEs of the two methods are higher than the proposed methods. Table 3 shows a comparison of the running time of the different algorithms. The computational cost of GD-CPHD is higher than that of GD-PHD due to the extra calculation required for cardinalised distribution estimation, but it is lower than that of MM-PHD that involves interactive operation of multiple models. The PF-PHD method has the fastest operation speed. This is there is no extra computation of model interaction and/or cardinalised distribution estimation in the PF-PHD filter. However, it has the worst tracking performance for manoeuvring target tracking compared to the other algorithms because only the linear model is used in the PF-PHD filter, and it does not have the tracking capability for tracking manoeuvring targets. Abbreviations: CPHD, cardinalised PHD; GD-PHD, grid driven PHD; OSPA, optimal sub-pattern assignment; PHD, probability hypothesis density.
YANG ET AL.
In order to further verify the stability of the proposed algorithm, Table 4 shows a comparison of the tracking accuracy for the GD-PHD and GD-CPHD algorithms under a maneuvering scenario with different process noises. It is clear that the average OSPA error is stable without big oscillations for different process noises, demonstrating that the proposed algorithms, GD-PHD and GD-CPHD, are not sensitive to process noise, with a good tracking performance. Table 5 shows the performance of the proposed methods under different clutter levels. The clutter rates are set as r ¼ 3; 5; 10; 20, σ 2 v ¼ 0:01 m 2 s −3 , and P D;k ¼ 0:98. Table 6 shows the performance of the proposed methods under different detection probabilities. The detection probabilities are set as P D;k ¼ 0:98; 0:9; 0:8; 0:7 and σ 2 v ¼ 0:01 m 2 s −3 , r ¼ 3. As can be seen, although the clutter rate increases dramatically or the detection probability decreases sharply, the average OSPA distances of GD-PHD and GD-CPHD increase slightly, which demonstrates that the proposed algorithms, GD-PHD and GD-CPHD, are also not sensitive to the different clutter rates and the different detection probabilities.
| CONCLUSIONS
In order to overcome the shortcomings of traditional PHD and CPHD filtering algorithms for MTT, with an unknown dynamic model and the newborn target distribution, improved GD algorithms are proposed, that is, the GD-PHD and GD-CPHD filtering algorithms, which can adaptively adjust the position and the size of the grids and identify newborn targets according to the measurements and the grid resolution. The dynamic tendency of the grids through the shrinkage and expansion operator can respond to the unknown arbitrary dynamic models. Experimental results show that the proposed algorithms have a better tracking performance than the traditional PF-PHD and MM-PHD filtering algorithms for arbitrary motion targets, and they do not require a priori intensity information for unknown newborn targets. | 7,096.6 | 2021-06-22T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
The Temple University Hospital EEG Data Corpus
The electroencephalogram (EEG) is an excellent tool for probing neural function, both in clinical and research environments, due to its low cost, non-invasive nature, and pervasiveness. In the clinic, the EEG is the standard test for diagnosing and characterizing epilepsy and stroke, as well as a host of other trauma and pathology related conditions (Tatum et al., 2007; Yamada and Meng, 2009). In research laboratories, EEG is used to study neural responses to external stimuli, motor planning and execution, and brain-computer interfaces (Lebedev and Nicolelis, 2006; Wang et al., 2013). While human interpretation is still the gold standard for EEG analysis in the clinic, a host of software tools exist to facilitate the process or to make predictive analyses such as seizure prediction.
Recently, a confluence of events has underscored the need for robust EEG tools. First, there has been a renewed push via the White House BRAIN initiative to understand neural function and disease (Weiss, 2013). Secondly, there is an increased awareness on brain injury owing to both the influx of injured warfighters and numerous high-profile athletes found to have chronic brain damage (McKee et al., 2009; Stern et al., 2011). And thirdly, a wave of consumer grade scalp sensors has entered the market, allowing end users to monitor sleep, arousal, and mood (Liao et al., 2012).
In all these applications, there is a need for robust signal processing tools to analyze the EEG data. Historically, EEG signal processing tools have been devised using either ad hoc heuristic methods, or by training pattern recognition engines on small data sets (Gotman, 1982). These methods have yielded limited results, owing mostly to the fact that brain signals (and EEG in particular) are characterized by great variability, which can only be properly interpreted by building statistical models using massive amounts of data (Alotaiby et al., 2014; Ramgopal et al., 2014). Unfortunately, despite EEG being perhaps the most pervasive modality for acquiring brain signals, there is a severe lack of data in the public domain. For example, the “EEG Motor Movement/Imagery Dataset” (http://www.physionet.org/pn4/eegmmidb/) contains ~1500 recordings of 1 or 2 min duration apiece from 109 subjects (Goldberger et al., 2000; Schalk et al., 2004). The CHB-MIT database contains data from 22 subjects, mostly pediatric (Shoeb, 2009). A database from Karunya University contains 175 16-channel EEGs of duration 10 s (Selvaraj et al., 2014). One of the most extensive databases for supporting epilepsy research is the European Epilepsy Database (http://epilepsy-database.eu/), which contains 250 datasets from 30 unique patients, but sells for €3000. Other databases, such as ieee.org, contain a wealth of data from more invasive modalities such as electrocorticogram, but little or no EEG.
This lack of publically available data is ironic considering that hundreds of thousands of EEGs are administered annually in clinical settings around the world. Relatively little of this data is publicly available to the research community in a form that is useful to machine learning research. Massive amounts of EEG data would allow the use of state-of-the-art machine learning algorithms to discover new diagnostics and validate clinical practice. Furthermore, it is desirable that such data be collected in clinical settings, as opposed to tightly controlled research environments, since “clinical-grade” data is inherently more variable with respect to parameters such as electrode location, clinical environment, equipment, and noise. Capturing this variability is critical to the development of robust, high performance technology that has real-world impact.
In this work, we describe a new corpus, the TUH-EEG Corpus, which is an ongoing data collection effort that has recently released 14 years of clinical EEG data collected at Temple University Hospital. The records have been curated, organized, and paired with textual clinician reports that describe the patients and scans. The corpus is publicly available from the Neural Engineering Data Consortium (www.nedcdata.org) (Picone and Obeid, 2016).
INTRODUCTION
The electroencephalogram (EEG) is an excellent tool for probing neural function, both in clinical and research environments, due to its low cost, non-invasive nature, and pervasiveness. In the clinic, the EEG is the standard test for diagnosing and characterizing epilepsy and stroke, as well as a host of other trauma and pathology related conditions (Tatum et al., 2007;Yamada and Meng, 2009). In research laboratories, EEG is used to study neural responses to external stimuli, motor planning and execution, and brain-computer interfaces (Lebedev and Nicolelis, 2006;Wang et al., 2013). While human interpretation is still the gold standard for EEG analysis in the clinic, a host of software tools exist to facilitate the process or to make predictive analyses such as seizure prediction.
Recently, a confluence of events has underscored the need for robust EEG tools. First, there has been a renewed push via the White House BRAIN initiative to understand neural function and disease (Weiss, 2013). Secondly, there is an increased awareness on brain injury owing to both the influx of injured warfighters and numerous high-profile athletes found to have chronic brain damage (McKee et al., 2009;Stern et al., 2011). And thirdly, a wave of consumer grade scalp sensors has entered the market, allowing end users to monitor sleep, arousal, and mood (Liao et al., 2012).
In all these applications, there is a need for robust signal processing tools to analyze the EEG data. Historically, EEG signal processing tools have been devised using either ad hoc heuristic methods, or by training pattern recognition engines on small data sets (Gotman, 1982). These methods have yielded limited results, owing mostly to the fact that brain signals (and EEG in particular) are characterized by great variability, which can only be properly interpreted by building statistical models using massive amounts of data (Alotaiby et al., 2014;Ramgopal et al., 2014). Unfortunately, despite EEG being perhaps the most pervasive modality for acquiring brain signals, there is a severe lack of data in the public domain. For example, the "EEG Motor Movement/Imagery Dataset" (http://www.physionet.org/pn4/eegmmidb/) contains ∼1500 recordings of 1 or 2 min duration apiece from 109 subjects (Goldberger et al., 2000;Schalk et al., 2004). The CHB-MIT database contains data from 22 subjects, mostly pediatric (Shoeb, 2009). A database from Karunya University contains 175 16-channel EEGs of duration 10 s (Selvaraj et al., 2014). One of the most extensive databases for supporting epilepsy research is the European Epilepsy Database (http://epilepsy-database.eu/), which contains 250 datasets from 30 unique patients, but sells for e3000. Other databases, such as ieee.org, contain a wealth of data from more invasive modalities such as electrocorticogram, but little or no EEG.
This lack of publically available data is ironic considering that hundreds of thousands of EEGs are administered annually in clinical settings around the world. Relatively little of this data is publicly available to the research community in a form that is useful to machine learning research. Massive amounts of EEG data would allow the use of state-of-the-art machine learning algorithms to discover new diagnostics and validate clinical practice. Furthermore, it is desirable that such data be collected in clinical settings, as opposed to tightly controlled research environments, since "clinical-grade" data is inherently more variable with respect to parameters such as electrode location, clinical environment, equipment, and noise. Capturing this variability is critical to the development of robust, high performance technology that has real-world impact.
In this work, we describe a new corpus, the TUH-EEG Corpus, which is an ongoing data collection effort that has recently released 14 years of clinical EEG data collected at Temple University Hospital. The records have been curated, organized, and paired with textual clinician reports that describe the patients and scans. The corpus is publicly available from the Neural Engineering Data Consortium (www.nedcdata.org) (Picone and Obeid, 2016).
METHODS
Clinical EEG data were collected from archival records at Temple University Hospital (TUH). All work was performed in accordance with the Declaration of Helsinki and with the full approval of the Temple University IRB. All personnel in contact with privileged patient information were fully trained on patient privacy and were certified by the Temple IRB.
Archival EEG signal data were recovered from CD-ROMs. Files were converted from their native proprietary file format (Nicolet's NicVue) to an open format EDF standard. Data was then rigorously de-identified to conform to the HIPAA Privacy Rule by eliminating 18 potential identifiers including patient names and dates of birth. Patient medical record numbers were replaced with randomized database identifiers, with a key to that mapping being saved to a secure off-line location. Importantly, our process captured instances in which the same patient received multiple EEGs over time and assigned database IDs accordingly. Data de-identification was performed by combining automated custom-designed software tools with manual editing and proofreading. All storage and manipulation of source files was conducted on dedicated non-network connected computers that were physically located within the TUH Department of Neurology.
We also manually paired each retrieved EEG with its corresponding clinician report. These reports are generated by FIGURE 1 | Directory and file structure of the TUH-EEG database. Data is organized by patient (orange) and then by session (yellow). Each session contains one or more signal (edf) and physician report (txt) files. To accommodate file system management issues, patients are grouped into sets of about 100 (blue). the neurologist after analyzing the EEG scan and are the official hospital summary of the clinical impression. These reports are comprised of unstructured text that describes the patient, relevant history, medications, and clinical impression. Reports were mined from the hospital's central electronic medical records archives and typically consisted of image scans of printed reports. Various levels of image processing were employed to improve the image quality before applying optical character recognition (OCR) to convert the images into text. A combination of software and manual editing was used to scrub protected health information (PHI) from the reports and to correct errors in OCR transcription. Only sessions with both an EEG and a corresponding clinician report were included in the final corpus.
The corpus was defined with a hierarchical Unix-style filetree structure. The top folder, edf, contains 109 numbered folders, each of which contain numbered folders for up to 100 patients. Each of these patient folders contains sub-folders that correspond to individual recording sessions. Those folder names reflect the session number and date of recording. Finally, each session folder includes one or more EEG (.edf) data files as well as the clinician report in .txt format. Figure 1 summarizes the corpus file structure and gives examples of text and signal data.
RESULTS
The completed corpus comprises 16,986 sessions from 10,874 unique subjects. Each of these sessions contains at least one EDF file (more in the case of long term monitoring sessions that were broken into multiple files) and one physician report. Corpus metrics are summarized in Figure 2. Subjects were 51% female and ranged in age from less than 1 year to over 90 (average 51.6, stdev 55.9; see Figure 2 bottom left). The average number of sessions per patient was 1.56, although as many as 37 EEGs were There was a substantial degree of variability with respect to the number of channels included in the corpus (see Figure 2 bottom right). EDF files typically contained both EEG-specific channels as well as supplementary channels such as detected bursts, EKG, EMG, and photic stimuli. The most common number of EEGonly channels per EDF file was 31, although there were cases with as few as 20. A majority of the EEG data was sampled at 250 Hz (87%) with the remaining data being sampled at 256 Hz (8.3%), 400 Hz (3.8%), and 512 Hz (1%).
An initial analysis of the physician reports reveals a wide range of medications and medical conditions. Unsurprisingly, the most common listed medications were anti-convulsants such as Keppra and Dilantin, as well as blood thinners such as Lovenox and heparin. Approximately 87% of the reports included the text string "epilep, " and about 12% included "stroke." Only 48 total reports included the string "concus." The TUH-EEG corpus v0.6.0 has been released and is freely available online at www.nedcdata.org. Users must register with a valid email address. The uncompressed EDF files and reports together comprise 572 GB. For convenience, the website stores all data from each patient as individual gzip files with a median filesize of 4.1 MB; all 10,874 gzips together comprise 330GB. Users wanting to access the entire database are encouraged to physically mail a USB hard drive to the authors in order to avoid the downloading process.
DISCUSSION
This work presents the world's largest publically available corpus of clinical EEG data, representing a grand total of 29.1 years (total duration summed over all EEG channels) of EEG data. In addition to its size, this corpus features a wide variation of patient ages, diagnoses, medications, channel counts, and sampling rates. Furthermore, the corpus continues to be expanded at a rate of ∼2500 new sessions per year.
Biomedicine is entering a new age of data-driven discovery driven by ubiquitous computing power, inexpensive data storage, the machine learning revolution, and high speed internet connections. Access to massive quantities of properly curated data is now the critical bottleneck to advancement in many areas of biomedical research. Ironically, doctors and clinicians generate enormous quantities of data every day, but that information is almost exclusively sequestered in secure archives where it cannot be used for research by the biomedical research community. The quantity, quality, and variability of such data represent a significant unrealized potential, which is doubly unfortunate considering that the cost of generating that data has already been borne. Although, there has been some advancement with respect to publishing databases of patient metadata, curated signal databases are much less commonly available, especially in quantities that would be sufficient to train most contemporary machine learning engines.
In this work, we have endeavored to achieve two goals. The first is to create a corpus of clinical EEG signals and their corresponding physician reports. The second is to establish best practices for the curation and publication of clinical signal data, which is an inherently different entity than discrete metadata. The EEG corpus we present here is the first of its kind, both in terms of volume and heterogeneity, both of which are critical factors for training machine learning engines. Typically, "research-grade" data is created by tightly controlling as many external factors as possible. In contrast, "clinical-grade" data is inherently heterogeneous with respect to those same external factors. Whereas certain classes of research questions can only be answered using well-controlled data, others benefit from variability. For example, an epilepsy detection algorithm that is trained using 31 specific EEG channels may not be effective if one or more of those channels are not connected, or if the electrodes are improperly located or affixed to the scalp. Algorithms that must be sufficiently robust to function under a plurality of conditions must be trained with data that is sufficiently heterogeneous.
Our work has shown that, although clinical signal data is ubiquitous and inherently valuable to the research community, it requires substantial manipulation before it can be released as an adequately curated data corpus. This effort is non-trivial, both in terms of time and cost. Our team's activities ranged from the mundane (e.g., manually copying archival hospital data from over 1500 CD-ROMs) to more technical challenges (e.g., developing software for detecting data entry errors in the clinical records). Physician reports had to be located through one of five different EMR portals, often manually. A battery of tests was created to validate that each record was complete, unique, errorfree, and completely free of privileged patient information. A rigorous accounting system was created to track and organize the tens of thousands of files and their status.
The cost to develop the TUH EEG Corpus has been relatively low, totaling less than $100 K in direct charges. As medical record technology improves, the cost of this collection can be reduced even further. On the balance, these types of large-scale collections are a worthwhile investment, since costs are minor relative to the cost of acquiring the data or conducting research on the data. In general, the authors expect that a dedicated community-wide data facility would be best suited to curate data of the magnitude and complexity described here because there are significant ongoing costs associated with such an activity.
An example of these on-going costs is annotation of the dataa critical issue for machine learning research. In most semisupervised machine learning applications, one of the first steps is to annotate the data, a process in which important elements of the signal are marked as such. This can be performed either manually by a human domain expert, or automatically with a bootstrapstyle algorithm. In addition to the EEG data itself, we are releasing a collection of annotations which may be downloaded separately if they are of interest to the user. The annotations contain the start and stop time and an event label and are specific to each channel. Six classes of events are included: (1) spike and/or sharp waves (SPSW), (2) periodic lateralized epileptiform discharges (PLED), and (3) generalized periodic epileptiform discharges (GPED). SPSW events are epileptiform transients that are typically observed in patients with epilepsy. PLED events are indicative of EEG abnormalities and often manifest themselves with repetitive spike or sharp wave discharges that can be focal or lateralized over one hemisphere. These signals display quasi-periodic behavior. GPED events are similar to PLEDs, and manifest themselves as periodic short-interval diffuse discharges, periodic long-interval diffuse discharges and suppression-burst patterns according to the interval between the discharges. Triphasic waves, which manifest themselves as diffuse and bilaterally synchronous spikes with bifrontal predominance, typically at a rate of 1-2 Hz, are also included in this class.
Three events are used to model background noise: (1) artifacts (ARTF) are recorded electrical activity that is not of cerebral origin, such as those due to the equipment, patient behavior or the environment; (2) eye movement (EYEM) are common events that can often be confused with a spike; (3) background (BCKG) is used for all other signals.
These six classes (three signal classes and three noise classes) were arrived at through several iterations of a study conducted with Temple University Hospital neurologists. Automatic labeling of these events allows a neurologist to rapidly search long-term EEG recordings for anomalous behavior. However, there are many more annotations that need to be developed for this data. For example, we are currently developing technology to automatically annotate seizures. There are many other events of interest that need annotation (e.g., sleep states). We expect to be continually enhancing the value of the TUH EEG Corpus. | 4,306.4 | 2016-05-13T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Trends in Articular Cartilage Tissue Engineering: 3D Mesenchymal Stem Cell Sheets as Candidates for Engineered Hyaline-Like Cartilage
Articular cartilage defects represent an inciting factor for future osteoarthritis (OA) and degenerative joint disease progression. Despite multiple clinically available therapies that succeed in providing short term pain reduction and restoration of limited mobility, current treatments do not reliably regenerate native hyaline cartilage or halt cartilage degeneration at these defect sites. Novel therapeutics aimed at addressing limitations of current clinical cartilage regeneration therapies increasingly focus on allogeneic cells, specifically mesenchymal stem cells (MSCs), as potent, banked, and available cell sources that express chondrogenic lineage commitment capabilities. Innovative tissue engineering approaches employing allogeneic MSCs aim to develop three-dimensional (3D), chondrogenically differentiated constructs for direct and immediate replacement of hyaline cartilage, improve local site tissue integration, and optimize treatment outcomes. Among emerging tissue engineering technologies, advancements in cell sheet tissue engineering offer promising capabilities for achieving both in vitro hyaline-like differentiation and effective transplantation, based on controlled 3D cellular interactions and retained cellular adhesion molecules. This review focuses on 3D MSC-based tissue engineering approaches for fabricating “ready-to-use” hyaline-like cartilage constructs for future rapid in vivo regenerative cartilage therapies. We highlight current approaches and future directions regarding development of MSC-derived cartilage therapies, emphasizing cell sheet tissue engineering, with specific focus on regulating 3D cellular interactions for controlled chondrogenic differentiation and post-differentiation transplantation capabilities.
Introduction
A plethora of therapies are clinically available for treating articular cartilage defects, all seeking to improve outcomes and mitigate osteoarthritis (OA) in the global population [1][2][3][4]. Advanced approaches employ cells prepared in vitro to increase control of cell populations, phenotypes, and dosing, with the goal of achieving more reliable hyaline cartilage regeneration [5,6]. Mesenchymal stem cells (MSCs) have been thoroughly researched as cell sources for cartilage tissue engineering due to accessibility, extended in vitro expansion capabilities, and chondrogenic lineage capacity [7][8][9][10]. However, MSC therapies are often limited by poor survival, engraftment, and control of MSC chondrogenic differentiation fate in vivo [7,11]. Therefore, one unique method of advanced cartilage 2 of 22 regeneration aims to prepare MSC-derived cartilage constructs that express hyaline-like characteristics at the time of transplantation with the goal of more rapidly and reliably replacing damaged hyaline articular cartilage [12].
To prepare these MSC-derived pre-differentiated cartilage therapies, design considerations must include both the extent and stability of in vitro chondrogenesis and in vivo transplantation capabilities to ensure robust and lasting hyaline regeneration. MSC chondrogenic potential is known to be increased in three-dimensional (3D) structures [13][14][15][16]; therefore, development of tailored 3D constructs that promote transition of cells toward stable hyaline-like cartilage in vitro is crucial for success. Three-dimensional structures influence chondrogenesis in part by increasing 3D cellular interactions compared to two dimensional (2D) constructs [17,18]. As a result, developing a 3D platform that optimizes and controls these cellular interactions should subsequently improve the final construct's hyaline-chondral characteristics.
Even when cells are successfully differentiated, delivery and retention in the joint represent two major translational hurdles. Traditional suspended cell injections for cartilage regeneration demonstrate no homing ability if injected intravenously and poor engraftment and cellular retention at injured or diseased sites even when administered directly to the synovial space, offering only transient pain reduction [8,19,20]. Recent data show only~3% cellular retention in the knee joint a few days post-injection with very few cells attached to the cartilage surface [8]. Obvious limitations in cell delivery result in inconsistent and suboptimal regeneration in vivo. Therefore, many current cell therapies utilize support materials to maintain cellular localization at the injury or defect sites [21,22]. Unfortunately, these additional support materials present added biocompatibility concerns [23]. As a result, MSC cartilage tissue engineering research has increasingly trended toward developing scaffold-free platforms that not only offer superior in vitro chondrogenic differentiation and optimized control 3D cellular interactions, but also support direct, unassisted delivery for robust engraftment with improved surgical versatility. Of these approaches, cell sheet tissue engineering specifically presents a unique scaffold-free platform that retains endogenous 3D cellular interactions and tissue-like organization for promoting stable in vitro hyaline-like chondrogenesis, while preserving intact adhesion molecules along the transplantation surface for direct in vivo transplantation [12,24,25]. The goal of this review is to discuss current and future directions in the development of tissue-engineered 3D MSC-derived hyaline cartilage, emphasizing cell sheet tissue engineering, with specific focus on controlled chondrogenic differentiation through 3D cellular interactions and post-differentiation engraftment capabilities.
Hyaline Cartilage Structure and Function
Hyaline articular cartilage is an avascular and aneural tissue that covers articulating surfaces, such as the knee, and has minimal intrinsic ability to regenerate without intervention. Hyaline cartilage structure and function ( Figure 1) have been thoroughly reviewed in recent literature [9,[26][27][28][29][30]. Briefly, it has a unique architecture and biochemical composition, comprising a sole cell type, chondrocytes, and their deposited extracellular matrix (ECM). Hyaline cartilage is characterized by predominantly rounded chondrocytes, organized in lacunae, at low cellular density, and the ECM deposited by these chondrocytes is rich in collagens type II, type IX, and type XI in addition to aggrecan, hyaluronic acid, glycosaminoglycans (GAGs), and other proteoglycans. The structure and relationship between the type II collagen and proteoglycans play a crucial role in providing hyaline cartilage's shock absorbing functionality through releasing and absorbing water in response to joint loading. Distinct from hyaline cartilage, fibrocartilage, a common clinical outcome from chondral defect therapies, is characterized by dense-packed, aligned collagen fibrils (rich in type I relative to type II collagen) lacking robust dynamic compression capabilities of hyaline cartilage [27,31,32]. To successfully develop hyaline cartilage replacement therapies, tissue-engineered cartilage constructs must satisfy key design specifications relative to native hyaline cartilage: be biocompatible, comprise viable rounded chondrocytes in Figure 1. Hyaline cartilage structure and biochemical composition. Schematic representation of hyaline cartilage zonal structure and variable cellular distribution, morphology, collagen organization, and biochemical composition. Created with BioRender.com.
Current Clinical Cartilage Regeneration Therapies
Articular cartilage defects are increasingly responsible for morbidity and compromised quality of life in the global population and remain a significant precursor to osteoarthritis (OA) [28,[33][34][35]. Based on a compelling need to regenerate durable cartilage in these defects, the past several decades witnessed numerous new therapeutic strategies designed to restore functional hyaline cartilage, increase patient quality of life, and reduce degenerative joint disease progression [21,28,36,37]. A multitude of clinical therapeutic options are currently available for treating chondral and osteochondral articular cartilage defects, thoroughly summarized in recent reviews [1][2][3][4]. These therapies include arthroscopic debridement, osteochondral allograft transplant (OCA), osteochondral autograft transplantation (OAT), mosaicplasty, and marrow stimulation techniques, among others [1][2][3][4]. Optimal therapy selection depends on numerous factors such as grade and location of the defect, patient age, and desired activity level.
For most smaller focal chondral defects, marrow stimulation, such as microfracture, is often the first-line treatment option [1,2,38]. Microfracture involves mechanical stimulation of the subchondral bone to repopulate the defect with autologous bone marrow that
Current Clinical Cartilage Regeneration Therapies
Articular cartilage defects are increasingly responsible for morbidity and compromised quality of life in the global population and remain a significant precursor to osteoarthritis (OA) [28,[33][34][35]. Based on a compelling need to regenerate durable cartilage in these defects, the past several decades witnessed numerous new therapeutic strategies designed to restore functional hyaline cartilage, increase patient quality of life, and reduce degenerative joint disease progression [21,28,36,37]. A multitude of clinical therapeutic options are currently available for treating chondral and osteochondral articular cartilage defects, thoroughly summarized in recent reviews [1][2][3][4]. These therapies include arthroscopic debridement, osteochondral allograft transplant (OCA), osteochondral autograft transplantation (OAT), mosaicplasty, and marrow stimulation techniques, among others [1][2][3][4]. Optimal therapy selection depends on numerous factors such as grade and location of the defect, patient age, and desired activity level.
For most smaller focal chondral defects, marrow stimulation, such as microfracture, is often the first-line treatment option [1,2,38]. Microfracture involves mechanical stimulation of the subchondral bone to repopulate the defect with autologous bone marrow that contains populations of regenerative stem cells [39]. Microfracture has shown clinical success in filling small focal chondral defects of the knee (<3.6 cm 2 ) and reducing pain short-term [40,41]. However, long-term follow-up data show that regenerated cartilage tissue is predominantly fibrocartilage with subsequent higher failure after two to five years [40,42,43]. Limitations of microfracture are often attributed to the low relative population of endogenous multipotent stem cells recruited to blood clots that fill the defect post-surgery, hindering the therapy's regenerative capacity [44].
Advanced approaches to regenerate native cartilage in chondral defects aim to specifically prepare the patients' own chondral cells (autologous chondrocytes from cartilage biopsy) ex vivo to support greater control of cell culture population, phenotype, and dosing upon re-implantation, with the goal of more reliable hyaline cartilage regeneration and enduring function in vivo. Autologous chondrocytes are the primary cell source used in these clinical cell-based cartilage regeneration therapies because chondrocytes are the primary cell source in articular cartilage [28,29]. Significantly, autologous cell sourcing presents few immunological hurdles based on the patient being both the donor and recipient of the ex vivo-processed cells. The first cell-based approach to treat articular cartilage defects-autologous chondrocyte implantation (ACI)-was FDA-approved in 1997 [45] with several new "generations" of ACI reported recently [1,28,44,46,47]. ACI harvests autologous chondrocytes from a healthy, low load-bearing area of the patient's cartilage, followed by cell expansion ex vivo, and then staged reimplantation of the expanded cells back to the defect as suspended cell injections under a sutured periosteal flap [22]. Unlike microfracture, ACI provides more reliable and improved pain reduction and mobility outcomes at 5-year follow-ups [48,49]. Further development of this therapy led to the use of porcine collagen support membranes for matrix-supported autologous cultured chondrocyte therapy (MACI) [22], FDA-approved in 2016 [50]. The collagen support membrane is intended to preserve chondrocyte characteristics during culture and retain cells in the defect site during transplantation. MACI has shown some in vivo therapeutic benefit in treating chondral defects [22,49,51,52]. Short-term 2-year clinical follow-ups reported 75% of tissue filling the defects was hyaline-like [53], and long-term 15-year follow-ups showed increases in Lysholm [54], International Knee Documentation Committee (IKDC) [55], and Tegner activity [56] scores compared to preoperative baselines [57]. However, superiority of MACI relative to ACI remains controversial. In randomized trials with 2-year follow-ups, no significant improvements (IKDC and Tegner activity scores) were noted for MACI compared to ACI, with ACI reporting slightly better International Cartilage Repair Society (ICRS) [58] and Lysholm functionality scores [1,3,22,59,60]. Few additional cell-based therapies have gained clinical approval in recent decades around the world, but ACI and MACI remain the only cell and tissue engineering cartilage therapies approved in the U.S. (Table 1). Hundreds more are currently in the clinical trial pipeline [1,21,61] (www.clinicaltrials.gov; accessed on 1 March 2021).
Limitations of Current Autologous Cell-Based Cartilage Regeneration Therapies
Despite clinical availability of several generations of these autologous cell-based cartilage regeneration therapies, clinical outcomes remain heterogeneous and unconvincing, and difficulties persist in enabling broader patient population applications [1,4,43,76]. One primary limitation of these therapies is reliance on autologous chondrocyte cell sourcing. Chondrocytes are known to dedifferentiate during in vitro culture and expansion, transitioning during preparation from their mature phenotype to fibroblast-like phenotypes, and also exhibit limited capacity for in vitro expansion before becoming senescent [1,2]. Autologous sourcing of these chondrocytes also introduces patient burden through multiple surgeries, donor site morbidity, and extended time between donation and treatment. Additionally, cell quality and quantity from autologous sources are donor-dependent, increasing procedural cost and complexity [6,25,45,77,78], and making it difficult, if not impossible, to predict, control, and standardize therapeutic potency [19,79]. Due to these limitations, further efforts focus on selecting improved, appropriate cell sources for cartilage tissue engineering and regenerative purposes. Greater consistency and control over cellular characteristics are needed to ensure reliable chondrogenic construct production and understand implant performance. Moreover, these sources should ideally be broadly applicable and efficacious for treating a wide range of patient populations [4,19,48,80,81].
Allogeneic Mesenchymal Stem Cells as Promising Cell Sources for Cartilage Applications
Developing tissue-engineered constructs for articular cartilage focal defect therapies increasingly focuses on transitioning from non-standard, heterogeneous autologous to standardized allogeneic cell sourcing [21,79,82]. In contrast to autologous cell sourcing issues, allogeneic cells offer greater control over cell quality and characteristics, improved accessibility, and potentially broader use [5,6,83]. Allogeneic sourcing also permits greater in vitro expansion capacity, and cells with various profiles and characteristics can be profiled, selected, validated, and banked, enabling "off-the-shelf" products [5,6,83]. Concerns regarding allogeneic cell immune rejection remain. However, with a long history of osteochondral allografting [38,84] and new insights into immune-matching [85,86], paired with reported immunomodulatory characteristics of certain allogeneic cell sources [87,88], translational prospects for human allogeneic cells are seemingly more feasible.
Advanced cell-based therapies also seek to replace chondrocytes with MSCs as the chondrogenic cell source. Chondrocyte sourcing is tissue-specific, whereas MSCs are adult progenitor cells isolated from a variety of tissues (e.g., bone marrow, adipose, dental pulp, umbilical cord, etc.), offering a widely accessible cell source [14,15,87,89]. Additionally, chondrocytes are limited by de-differentiation during culture and passaging, while MSCs exhibit strong capacity for in vitro expansion while maintaining their identity and unique capacity for in vitro self-renewal [11,16,82,88,90]. Although not standardized, MSC identity is generally confirmed via several accepted surface markers: CD90 + , CD44 + , CD73 + , CD105 + , CD11 − , CD34 − , CD45 − [91,92]. When selecting appropriate MSC sources it is important to account and test for reduced in vitro self-renewal and differentiation capacities induced by extensive passaging, occurring at different rates for different MSCs [11,83,[93][94][95]. Specific to chondral regeneration, MSCs have utility for fabricating cartilage in vitro based on their multilineage differentiation potential, including the capacity to transition to chondrocytes [15,16,87,88]. Many reports have described undifferentiated MSC therapies exhibiting some therapeutic efficacy in delaying cartilage degeneration and reducing pain [8,19,96,97]. However, in vitro and in vivo MSC differentiation fate and maintenance are still not easily controlled, limiting these therapies' capabilities to induce lasting cartilage regeneration [7,20,98]. Advanced approaches in MSC-based cartilage regeneration aim to employ allogeneic MSC sources and exploit innate MSC chondrogenic potential to better control their differentiation in vitro, preparing hyaline-like transplantable constructs for rapid structural cartilage regeneration through direct tissue replacement in vivo, applicable to a broader range of patients with a more consistent cell-based product.
Three-Dimensional Culture for MSC Chondrogenesis
MSC-derived hyaline-like cartilage constructs prepared in vitro actively exploit recent advances in 3D culture systems ( Figure 2). maintenance are still not easily controlled, limiting these therapies' capabilities to induce lasting cartilage regeneration [7,20,98]. Advanced approaches in MSC-based cartilage regeneration aim to employ allogeneic MSC sources and exploit innate MSC chondrogenic potential to better control their differentiation in vitro, preparing hyaline-like transplantable constructs for rapid structural cartilage regeneration through direct tissue replacement in vivo, applicable to a broader range of patients with a more consistent cell-based product.
Three-Dimensional Culture for MSC Chondrogenesis
MSC-derived hyaline-like cartilage constructs prepared in vitro actively exploit recent advances in 3D culture systems ( Figure 2).
Figure 2.
Categories of three-dimensional culture platforms for in vitro mesenchymal stem cell (MSC) differentiation. Cellular interactions (yellow linkers) and surface interface adhesion molecules (green markers) among the constructs varies in response to biomaterials and construct cellular organization. Created with BioRender.com.
MSC multipotency enables directed cell differentiation to hyaline-like chondrocyte phenotypes in vitro within 3D cultures both with and without supporting biomaterials [14][15][16]87]. Successful MSC chondrogenesis is generally verified by detecting positive expression of hyaline cartilage markers within the cells and their deposited ECM (e.g., Sox9, sulfated proteoglycans, type II collagen, and aggrecan) [7,16,99,100]. A persisting limitation in MSC chondrogenesis is the expression of transient hyaline-like cartilage phenotypes with the inevitable and undesired transition toward hypertrophic or fibrocartilage phenotypes [7][8][9]44]. Therefore, hyaline differentiation must also exhibit persistent negative marker expression of type X and type 1 collagens and MMP13 [7,16,99,100]. Researchers have long noted that 3D culture conditions and 3D cellular interactions are essential for inducing and maintaining this stable hyaline-like chondrogenesis [7,14,18,99,[101][102][103][104]. Standard 2D culture conditions limit chondrogenesis because they are unable to promote requisite 3D cellular interactions and structures associated with chondrogenic condensation and further maturation [17,105,106]. Unlike traditional adherent 2D cell culture methods, 3D culture platforms allow cells to assume rounded morphologies associated with mature chondrocytes [13,107,108] and promote 3D cellular interactions, mimicking early MSC multipotency enables directed cell differentiation to hyaline-like chondrocyte phenotypes in vitro within 3D cultures both with and without supporting biomaterials [14][15][16]87]. Successful MSC chondrogenesis is generally verified by detecting positive expression of hyaline cartilage markers within the cells and their deposited ECM (e.g., Sox9, sulfated proteoglycans, type II collagen, and aggrecan) [7,16,99,100]. A persisting limitation in MSC chondrogenesis is the expression of transient hyaline-like cartilage phenotypes with the inevitable and undesired transition toward hypertrophic or fibrocartilage phenotypes [7][8][9]44]. Therefore, hyaline differentiation must also exhibit persistent negative marker expression of type X and type 1 collagens and MMP13 [7,16,99,100]. Researchers have long noted that 3D culture conditions and 3D cellular interactions are essential for inducing and maintaining this stable hyaline-like chondrogenesis [7,14,18,99,[101][102][103][104]. Standard 2D culture conditions limit chondrogenesis because they are unable to promote requisite 3D cellular interactions and structures associated with chondrogenic condensation and further maturation [17,105,106]. Unlike traditional adherent 2D cell culture methods, 3D culture platforms allow cells to assume rounded morphologies associated with mature chondrocytes [13,107,108] and promote 3D cellular interactions, mimicking early condensation stages during cartilage development and playing an important role in stabilizing terminally differentiated cartilage [13,99,101].
The most common method for assessing MSC chondrogenic potential in vitro employs spheroids [116], usually as pellet or micromass cultures [14][15][16]. Beyond their simplicity of fabrication, these cultures allow cells to self-aggregate and assume rounded morphologies while establishing 3D cellular interactions necessary for chondrogenesis [14][15][16]117,118]. Although pellet cultures allow cells to assume rounded morphologies, these cultures regularly produce heterogenous tissue in vitro that does not mimic native cartilage in structure, phenotype, or function. Such heterogeneity is often attributed to media and oxygen diffusion limitations influencing 3D cellular interactions, resulting in variable differentiation between the pellet's periphery and hypoxic core [119][120][121].
In an attempt to offer improved control over cell differentiation, many MSC differentiation platforms employ natural or synthetic biomaterial scaffolds, such as collagens, alginates, hyaluronic acid, agarose, chitosan, decellularized "native" ECM, and polyglycolic acid (PGA)/polylactic acid (PLA), to accommodate cells in 3D structures and promote MSC chondrogenic differentiation [107,122,123]. These biomaterial scaffolds permit a high degree of control over 3D construct architecture, a key component in controlling MSC chondrogenesis [124,125]. Extensive work is reported for further tailoring these biomaterial scaffolds, via fabrication techniques (e.g., bioprinting, electrospinning, molding, etc.) and combinations of cell ligands and binding motifs, macro-and micro-structure, stiffness, and other biomaterials properties [23,98,107,123,[126][127][128] seeking to promote and maintain cellular interactions and functionality, supporting transitions toward hyaline-like phenotypes [23,122,129]. However, these approaches are often limited by poor cell-cell communication due to interruptive scaffold materials hindering requisite direct cell-cell and cell-ECM interactions, hyaline-like cell transitions, and reliable hyaline-like phenotypic preservation [23,124,129].
Scaffold-free approaches offer increasing benefits compared to scaffold-based methods, supporting MSC differentiation in 3D conditions, within their endogenous ECM and in continuous, direct 3D contact, promoting necessary cellular interactions without scaffold interference [23]. Scaffold-free cell-based constructs can also accommodate higher cell densities than scaffold-based approaches, and despite native cartilage's intrinsic low cell density [30,130], cell-dense constructs are recognized as necessary for promoting in vitro MSC chondrogenesis [15,118,131,132]. Recently proposed advanced scaffold-free methods employ high-density seeding cultures that create disc-like cartilage constructs in vitro by seeding MSCs into porous cell culture inserts at very high concentrations [100,[133][134][135][136]. These high-density 3D cultures induce more homogenous chondrogenesis compared to pellet cultures, and produce more ergonomic implant forms to more completely fill cartilage defects [100,[133][134][135]. However, these approaches are hindered by exorbitant cell seeding densities and limited control over cellular interactions in culture, based solely on cell aggregation forced by over-confluence [100,[133][134][135][136]. Such high-density 3D constructs are sometimes referred to as "cell sheets" [134,135], but differ significantly from temperatureresponsive culture dish (TRCD) derived cell sheets discussed in Sections 8 and 9 based on their (1) three-dimensionality achieved solely through over-confluent culture, and (2) harvest methods reliant on mechanical detachment that damage the cultured construct's adhesion interface. Despite extensive work focused on promoting in vitro hyaline-like chondrogenesis within a wide range of 3D culture constructs, these platforms are still broadly unable to sufficiently control both structure and 3D cellular interactions, hindering resulting chondrogenic stability and homogeneity in vitro.
Transplantation Capabilities of 3D MSC Chondrogenic Cultures
Even when 3D culture platforms achieve hyaline-like chondrogenesis in vitro, these resulting cellular constructs are still unable to directly adhere and interface with host tissues in vivo. Most constructs require additional transplantation support materials (e.g., suturing, fibrin glue, periosteal flap, etc.), increasing biocompatibility concerns and disrupting direct communication between the transplanted cells and host tissue [19,27,137,138]. Limited unassisted in vivo tissue engraftment is often attributed to chondrogenic constructs' inadequate endogenous expression of surface adhesion molecules [12,80,107,123,124,129,139]. Poor in vivo tissue site engraftment leads to construct delamination, loss of transplanted cell viability, mechanical instability, and decreased integration with host tissue, common precursors for fibrocartilage tissue formation [26] and suboptimal pre-clinical in vivo outcomes [8,27,31,140,141]. Discrepancies between in vitro and in vivo pre-clinical results may be partly due to the high variability among animal models employed [142][143][144][145][146][147], but inferior engraftment and retention remain driving factors of pre-clinical failure regardless of the model employed [26].
Cartilage tissue transplant failure is also attributed to insufficient interfacial properties [148]. Native hyaline cartilage exhibits a low coefficient of friction at the joint interface, allowing free sliding of adjacent cartilage surfaces under high pressure during joint articu-Cells 2021, 10, 643 9 of 22 lation [149,150]. To successfully replace hyaline cartilage at focal defect sites, transplanted cartilage constructs must be able to not only adhere and engraft into the defect site, but also present a suitable articulating surface that mitigates excessive frictional forces during joint function. As superficial chondrocytes naturally produce lubricating agents, such as lubricin and hyaluronic acid [31,151], some approaches focus on functionalizing the cells within 3D structures to tailor their secretion abilities and recreate this lubricated articular surface [7,152]. Other approaches, specifically those employing cell-seeded hydrogels, focus on selecting scaffold biomaterials with low intrinsic coefficients of friction [31,153]. However, the inability of current constructs to both strongly adhere and recapitulate this lubrication interface increases associated friction during articulation, causing pain, abnormal stress and wear on the transplant, and increased risk of tissue delamination [149].
Despite 3D cell delivery platforms being designed to create hyaline-like chondrogenic constructs capable of engraftment and retention at the defect site, to date, no platform has yielded robust evidence of success, necessitating further investigation in controlled clinical trials to verify translational potential of these therapies [1,21,23,154]. A clear unmet need persists for improved 3D MSC platforms that not only control 3D cellular interactions in vitro to reliably yield more stable hyaline-like cartilage constructs, but also enhance their adhesion for mechanical and physiological integration in vivo to better address current translational limitations in MSC-based cartilage regeneration.
Cartilage tissue transplant failure is also attributed to insufficient interfacial properties [148]. Native hyaline cartilage exhibits a low coefficient of friction at the joint interface, allowing free sliding of adjacent cartilage surfaces under high pressure during joint articulation [149,150]. To successfully replace hyaline cartilage at focal defect sites, transplanted cartilage constructs must be able to not only adhere and engraft into the defect site, but also present a suitable articulating surface that mitigates excessive frictional forces during joint function. As superficial chondrocytes naturally produce lubricating agents, such as lubricin and hyaluronic acid [31,151], some approaches focus on functionalizing the cells within 3D structures to tailor their secretion abilities and recreate this lubricated articular surface [7,152]. Other approaches, specifically those employing cellseeded hydrogels, focus on selecting scaffold biomaterials with low intrinsic coefficients of friction [31,153]. However, the inability of current constructs to both strongly adhere and recapitulate this lubrication interface increases associated friction during articulation, causing pain, abnormal stress and wear on the transplant, and increased risk of tissue delamination [149].
Despite 3D cell delivery platforms being designed to create hyaline-like chondrogenic constructs capable of engraftment and retention at the defect site, to date, no platform has yielded robust evidence of success, necessitating further investigation in controlled clinical trials to verify translational potential of these therapies [1,21,23,154]. A clear unmet need persists for improved 3D MSC platforms that not only control 3D cellular interactions in vitro to reliably yield more stable hyaline-like cartilage constructs, but also enhance their adhesion for mechanical and physiological integration in vivo to better address current translational limitations in MSC-based cartilage regeneration.
Cell Sheet Technology as a Transplantable 3D Tissue-Like Platform
Cell sheet technology supports fabrication of transplantable, scaffold-free, 3D, tissuelike cell constructs [155][156][157][158][159] (Figure 3). The cell sheet technology developed by Okano et al. employs poly(N-isopropylacrylamide) (PIPAAm)-grafted temperature-responsive culture dishes (TRCDs) that facilitate cell adhesion and growth at 37 • C [158][159][160]. Below the PIPAAm lower critical solution temperature (32 • C), cells spontaneously detach from the culture surface, bypassing typical culture requirements for damaging enzymatic cell harvesting [160,162]. This temperaturemediated detachment retains endogenous cell-cell and cell-ECM interactions and preserves cellular environments, allowing cultured cells to be harvested as intact cell sheets [83,156,157,160,[162][163][164]. As cells are seeded and grown under adherent 2D conditions, this abrupt temperature-mediated detachment prompts established cytoskeletal filaments and retained ECM to naturally contract when released from culture surfaces [165,166]. This post-detachment cell sheet contraction spontaneously yields 3D, multi-nuclei thick, scaffold-free cell sheet structures [12,161]. Cell sheet three-dimensionality can be further controlled by cell sheet layering to produce tissues of specified thicknesses and cellular densities, even combining cell sheets from different cell sources [157,[167][168][169][170]. Cell sheet post-detachment contraction and layering both increase 3D cellular interactions, areas of hypoxia within the construct, and functionality relative to suspended cells and 2D conditions [167,171,172].
In addition to promoting 3D architecture with increased 3D cellular interactions, cell sheets naturally retain innate surface receptors, ECM, and tissue adhesion capabilities, allowing spontaneous engraftment to tissue sites and rapid initiation of direct cell-cell communication [156,157]. Cell sheets fabricated from a wide range of cell sources have been applied to a multiple tissue targets and show significant adhesion and localization capabilities [157,173,174]. Specifically, for cartilage regeneration therapies, significant translational work has focused on cell sheet technology approaches for repairing and replacing hyaline cartilage using various cell sources and preparation methods ( Table 2). In vitro/in vivo (allogeneic rabbit) Layering [173] Rat articular chondrocytes and synoviocytes In vivo (allogeneic rat) Layering [175] Rabbit articular chondrocytes and synoviocytes In vivo (allogeneic rabbit) Layering [176] Porcine articular chondrocytes In vivo (allogeneic minipig) Layering [177] Human articular chondrocytes In vitro Co-culture with synoviocytes + layering [178] Human articular chondrocytes In vitro Co-culture with synoviocytes + layering [179] Human articular chondrocytes In vivo (xenogeneic immunosuppressed rabbit) Co-culture with synoviocytes + layering [180] Human articular chondrocytes and synoviocytes In vivo (athymic rat) Co-culture with synoviocytes + layering [181] Autologous human articular chondrocytes (with microfracture) In vivo (autologous human-small cohort clinical study) Co-culture with synoviocytes + layering [182] Rat articular chondrocytes In vitro/in vivo (allogeneic rat) None [183] Human juvenile polydactyly chondrocytes In vitro/in vivo(xenogeneic immunosuppressed rabbit) None [184] Human juvenile polydactyly chondrocytes In vivo (athymic rat) None [25] Human endometrial gland-derived MSCs In vitro Layering [171] Human bone marrow-derived MSCs In vitro Chondrogenic induction medium + hypoxia (5% O 2 ) [12] Cell sheet technology employing chondrocyte sources has shown preliminary success in both pre-clinical models and small cohort clinical studies [24,25,173,[175][176][177][180][181][182][183][184]. Chondrocyte sheets adhere directly and spontaneously to cartilage tissue via retained endogenous ECM and adhesion proteins. Notably, this strength of defect site adhesion for the undifferentiated chondrocyte sheets is sufficient to allow initial defect retention without suturing, and to withstand knee joint mechanical forces while maintaining longterm localization of transplanted cells [24,173,177,180,182,185]. This engraftment capability facilitates successful chondrocyte sheet induction of hyaline-like cartilage regeneration in articular cartilage focal chondral defects by 4 weeks post-transplantation [24,25,173,177,[180][181][182] (Figure 4a-d).
tained endogenous ECM and adhesion proteins. Notably, this strength of defect site adhesion for the undifferentiated chondrocyte sheets is sufficient to allow initial defect retention without suturing, and to withstand knee joint mechanical forces while maintaining long-term localization of transplanted cells [24,173,177,180,182,185]. This engraftment capability facilitates successful chondrocyte sheet induction of hyaline-like cartilage regeneration in articular cartilage focal chondral defects by 4 weeks post-transplantation [24,25,173,177,[180][181][182] (Figure 4a-d).
Three-dimensional MSC Sheets as In Vitro Platforms for Fabricating Transplantable Hyaline-Like Cartilage
Emerging cell sheet approaches prepare in vitro chondrogenically differentiated MSC sheets that are directly transplantable in vivo, which should support more rapid hyaline cartilage replacement at defect sites for future in vivo regenerative therapies. Reliable fabrication of 3D MSC sheets increases cell-cell interactions, promotes hyaline-like chondrogenesis, and retains construct adhesion capabilities [12], all of which are essential to support robust and direct replacement of damaged or missing hyaline cartilage. Sheet-
Three-Dimensional MSC Sheets as In Vitro Platforms for Fabricating Transplantable Hyaline-Like Cartilage
Emerging cell sheet approaches prepare in vitro chondrogenically differentiated MSC sheets that are directly transplantable in vivo, which should support more rapid hyaline cartilage replacement at defect sites for future in vivo regenerative therapies. Reliable fabrication of 3D MSC sheets increases cell-cell interactions, promotes hyaline-like chondrogenesis, and retains construct adhesion capabilities [12], all of which are essential to support robust and direct replacement of damaged or missing hyaline cartilage. Sheet-enhanced 3D cellular interactions specifically benefit MSC chondrogenesis in vitro, resulting in stable hyaline-like phenotypes and delayed hypertrophic transitions compared to standard pellet cultures [12]. Cell sheet 3D manipulation affords greater control over the induction of pro-chondrogenic 3D cell-cell and cell-ECM interactions and increased control of the final chondrogenic cell sheet characteristics ( Figure 5).
Cell sheet technology employs multiple manipulation techniques for promoting specific pro-chondrogenic interactions. Post-detachment cell sheet contraction, occurring spontaneously following temperature-mediated detachment from adherent culture, and sheet multilayering are primary strategies used to control and influence cellular interactions and MSC chondrogenic differentiation in scaffold-free cell sheet forms [25,157,[167][168][169][170][171][172] (Figure 5a). Cell sheet contraction can be modified by changing cell seeding density, culture time, MSC source, or use of removable support membranes [155,166,167,186]. Cell sheet multilayering has also been utilized extensively in various cell sheet tissue engineering applications [167,169,170,187,188]. Specifically, multilayering chondrocyte sheets has been shown to directly increase 3D cellular interactions, promoting enhanced chondrogenic characteristics within those sheets [173,178,179]. Moreover, layering endometrial cell sheets increased glycosaminoglycan and collagen development within as little as 24 h [171] (Figure 5b). This multilayering manipulation should facilitate similar control of 3D cellular interactions within MSC-derived sheets, as well as construct thickness and density. These factors directly impact the oxygen tension and hypoxic conditions within the MSC construct, stimulating more controlled transitions to hyaline-like phenotypes in vitro. Multilayering may also prompt more rapid chondrogenesis, decreasing MSCderived hypertrophic characteristics commonly associated with extended in vitro media induction [18,103].
Cells 2021, 10, x 11 of 21 enhanced 3D cellular interactions specifically benefit MSC chondrogenesis in vitro, resulting in stable hyaline-like phenotypes and delayed hypertrophic transitions compared to standard pellet cultures [12]. Cell sheet 3D manipulation affords greater control over the induction of pro-chondrogenic 3D cell-cell and cell-ECM interactions and increased control of the final chondrogenic cell sheet characteristics ( Figure 5). Cell sheet technology employs multiple manipulation techniques for promoting specific pro-chondrogenic interactions. Post-detachment cell sheet contraction, occurring spontaneously following temperature-mediated detachment from adherent culture, and sheet multilayering are primary strategies used to control and influence cellular interactions and MSC chondrogenic differentiation in scaffold-free cell sheet forms [25,157,[167][168][169][170][171][172] (Figure 5a). Cell sheet contraction can be modified by changing cell seeding density, culture time, MSC source, or use of removable support membranes [155,166,167,186]. Cell sheet multilayering has also been utilized extensively in various cell sheet tissue engineering applications [167,169,170,187,188]. Specifically, multilayering chondrocyte sheets has been shown to directly increase 3D cellular interactions, promoting enhanced chondrogenic characteristics within those sheets [173,178,179]. Moreover, layering endometrial cell sheets increased glycosaminoglycan and collagen development within as little as 24 h [171] (Figure 5b). This multilayering manipulation should facilitate similar control of 3D cellular interactions within MSC-derived sheets, as well as construct thickness and density. These factors directly impact the oxygen tension and hypoxic conditions within the MSC construct, stimulating more controlled transitions to hyaline-like phenotypes in vitro. Multilayering may also prompt more rapid chondrogenesis, decreasing MSC-derived hypertrophic characteristics commonly associated with extended in vitro media induction [18,103].
In addition to promoting stable hyaline-like chondrogenesis in vitro, MSC sheets retain strong adhesion capabilities after chondrogenic differentiation [12]. Post-differentiation temperature-mediated harvest does not damage cell sheet characteristics, thereby allowing maintenance of critical adhesion molecule expression for cells along the basal side of the sheet. MSC-derived hyaline-like cell sheets can strongly adhere to fresh ex vivo In addition to promoting stable hyaline-like chondrogenesis in vitro, MSC sheets retain strong adhesion capabilities after chondrogenic differentiation [12]. Post-differentiation temperature-mediated harvest does not damage cell sheet characteristics, thereby allowing maintenance of critical adhesion molecule expression for cells along the basal side of the sheet. MSC-derived hyaline-like cell sheets can strongly adhere to fresh ex vivo cartilage tissue and rapidly initiate mechanical and biochemical signaling interactions between the cell sheet and adjacent native cartilage [12]. Based on previous adhesion studies conducted with chondrocyte sheets [173] and their successful integration and maintained adhesion in vivo [24,177,180,182], these adhesion capabilities of chondrogenically differentiated MSC sheets are expected to promote similar stable engraftment and enhanced cellular communication in this environment.
Cell sheet in vitro chondrogenesis studies support prior assertions that three-dimensional cell interactions play essential roles in fabrication and stability of in vitro hyaline-like cartilage. Furthermore, cell sheet manipulation techniques allow greater control over these 3D cellular interactions and related hypoxic culture conditions, while maintaining known cell sheet adhesion capabilities. Additional application of hypoxic culture conditions for chondrogenic induction not only significantly increases the MSC sheets' chondrogenic capacity, but should also condition them for the hypoxic in vivo environment, allowing greater retention of cellular functionality post-transplantation. These chondrogenic capacity and adhesion capabilities position MSC cell sheet technology as a prospective next-generation platform for fabricating future translational allogeneic MSC therapies offering direct, unassisted transplantation of hyaline-like cartilage constructs for improved future articular cartilage regeneration. To improve upon current cell-based approaches for cartilage regeneration in human defects, these implanted MSC-derived cartilage sheets will have to demonstrate key regenerative behaviors in vivo, notably: complete filling of the focal defect, lateral and basal integration with the host tissue, lasting retention of hyaline-like phenotypes within the defect, and mechanical properties similar to native cartilage once integrated.
Summary
Articular cartilage defects represent inciting events and a significant cause of degenerative joint disease with inevitable progression to generalized OA [28,[33][34][35]. Although many clinical therapies exist for treating these defects, none achieve lasting, robust regeneration of hyaline cartilage [1,4,43,76]. Advanced cell therapy products are continually being developed to address the limitations of current clinical therapies, but few have shown much clinical promise to date in practically addressing diverse chondral defects [1,21,23,154]. Overall, tissue engineering cartilage therapies are still largely limited in their control over in vitro cellular interactions necessary for producing robust hyaline-like cartilage and inconsistent in vivo engraftment, hindering integration with the host tissue and lasting replacement of hyaline cartilage [23,124,129]. Some 3D MSC-based approaches, specifically those employing banked, standardized allogeneic MSCs within scaffold-free 3D constructs, offer very promising platforms for producing cartilage constructs in vitro via controlled 3D structures and key cellular interactions that are capable of inducing reliable, rapid regeneration of hyaline-like cartilage in vivo in articular cartilage focal defects.
Although in vitro chondrogenic differentiation is extensively published for pellet cultures, cell seeded scaffolds, and scaffold-free high-density seeding cultures [14][15][16]87], these 3D constructs are limited in their abilities to achieve both robust hyaline-like differentiation and direct, unassisted transplantation to defect sites [1,21,23,154]. To address these concerns, cell sheet tissue engineering constructs afford improved control of 3D cellular interactions, maintenance of chondrogenic characteristics via established manipulation techniques, and optimize endogenous adhesion abilities [83,156,157,160,[162][163][164]. To date, autologous chondrocyte cell sheets have exhibited experimental and some clinical success in adhering, surviving, and inducing regeneration in articular cartilage defects [24,173,177,180,182,185,189]. These data provide an important precedent for further development of cell sheet therapies that support more rapid cartilage regeneration. The chondral regeneration field is currently transitioning toward the creation of single-stage, immediately available cell-based chondral restoration options [21]. In this vein, cell sheet tissue engineering employing allogeneic MSCs presents a unique platform capable of (1) producing stable in vitro hyaline-like cartilage from banked MSCs, (2) providing an off-the-shelf, pre-validated cartilage tissue construct without biomaterials support, and (3) maintaining and sustaining endogenous cellular adhesion and signaling for direct transplantation to cartilage tissues applicable to a broad patient population.
Future Perspectives
Despite decades of research on tissue engineering and MSC chondrogenesis, current chondrogenic approaches are largely unable to reliably create stable hyaline-like cartilage in vitro that is directly transplantable in vivo to a broad patient population. However, cell sheet tissue engineering offers a unique scaffold-free platform to facilitate enhanced in vitro hyaline-like differentiation, to support direct in vivo transplantation to defects without biomaterials support. Combining cell sheet technology with allogeneic MSC sourcing, specifically for MSCs that have been screened for cell potency and differentiation capacity, should facilitate more rapid and reliable cartilage regeneration for a broader patient population.
Although hundreds of MSC-derived cell therapy clinical trials are ongoing, no MSCbased regenerative medicine applications have clinical validation for cartilage regeneration. While causes for failure with these MSC therapies are not fully understood, the current inability to properly control cellular interactions and cellular phenotypes in vitro to reliably yield stable hyaline-like cartilage, combined with poor tissue site engraftment and retention in vivo necessary to restore normal cartilage functional properties through mechanical and biochemical signaling, are central hypotheses. To improve upon cell-based and MSC therapies, specific considerations and attention must be paid to (1) selecting and validating appropriate cell sources, essential to regulatory and manufacturing challenges during translation, (2) the importance of three dimensionality in tissue-like structures and its role in inducing and maintaining 3D cellular interactions required for stable in vitro hyaline like chondrogenesis, (3) robust engraftment and integration of the transplanted construct with host tissue, and (4) the long-term stability of hyaline features in vivo without reversion to fibrocartilage. Focusing on these essential performance specifications will support progress in developing MSC-derived therapies that are both transplantable and phenotypically stable as hyaline-like cartilage to robustly regenerating hyaline articular cartilage at the site of articular cartilage defects.
Furthermore, future approaches may additionally enhance MSC chondrogenic potential and robust tissue regeneration and integration through the use of CRISPR or other gene editing techniques [91,[190][191][192] to bias MSCs using guided genetic instructions. Incorporating these modified allogeneic MSCs into established transplantable 3D cell sheets could yield even more robust hyaline-like tissues with greater regenerative potential, but will likely face greater regulatory scrutiny and manufacturing hurdles in their path to clinical approval [45,78,193,194]. | 9,004.8 | 2021-03-01T00:00:00.000 | [
"Biology",
"Engineering",
"Medicine"
] |
Oil Price Shocks, Durables Consumption, and China’s Real Business Cycle
Motivated by the facts that the sharp volatility in international oil prices has become one of the important external sources in driving China’s economic fluctuations, and in view of the strong correlation between oil and consumer durables, we build a real business cycle (RBC) model incorporating durable goods consumption in the context of oil price shocks. Using quarterly data on Chinese economy to conduct an empirical test, we examine China’s cycle characteristics of macroeconomic volatility and the transmission mechanism of oil price shocks. The study shows: 1) In the RBC model the consumption will be divided into durables and non-durables, which plays a crucial role in explaining Chinese economic fluctuations. The core of the model is to improve the forecast of consumption volatility and weak pro-cyclicality, which is closer to the actual economy; 2) Oil price shocks mainly affect consumption volatility, but seldom influence output, investment and labor, the three variables of which are largely influenced by technology shocks; 3) The model reveals that the transmission mechanism is determined by intra-temporal income effects and inter-temporal effects of portfolio rebalanced between durable goods and capital goods.
Introduction
Since its reform started in 1978, China's economy has sustained high growth about 8% -10% 1 , shoring up the demand for oil. As early as 2003, China has 1 The statistics come from the National Bureau of Statistics of China.
surpassed Japan to become the world's second largest oil consumer after the United States. With a sharp rise in the consumption scale, external dependence of crude oil is also rising. Since the year 2011, China has surpassed the US as the world's largest oil importer; in 2012, China's net oil imports accounted for 86% of the global growth increment; its dependence on foreign oil in 2014 reached 59.5% of its overall consumption. From the beginning of this century, international oil prices have gone up and down for more than 50% for three times. Take the recent market for example, since the second half of 2014, the British Brent crude oil prices fell more than 60% in less than seven months, second only to the financial crisis in 2008. The sharp volatility in international oil prices has become one of the important external sources in driving business cycle fluctuations in China. As a major energy and raw material in modern industry, oil price volatility influences a nation's macro-economy through a variety of channels [1].
According to the analytical framework of "shock-transmission mechanism" for business cycle theory, oil price shocks belong to the supply-type of real business cycle. Compared to the RBC theory with technology shocks of the supply type [2] [3] [4], China's RBC literatures based on the oil shocks are still in great short, and besides, it lacks studies on RBC models established on China's economic data in empirical testing and prediction. In an attempt to shed light on the above issues, this paper accounts for the impact factors in driving the cyclical pattern of China's economy in the sight of oil price shocks.
It should be emphasized that, in recent years, the driving forces for China's oil consumption growth not only come from industrialization and urbanization, but also from changes in the structure of consumer demand. The consumption structure of Chinese residents has gone from subsistence to well-off, and then upgraded to be the consumer. On one hand, the proportion of total food consumption is on decline, with the Engel coefficient of urban residents decreasing from 57.5% in 1978 to 35.0% in 2013, and rural residents from 67.7% to 37.7% (from China Statistical Yearbook, 2014). On the other hand, the types of consumer goods continue to be enriched and the quality continues to be improved, among which the most obvious sign is that the various durable goods of residents have continued to be on increase. Not only has the amount of color television sets, refrigerators and other traditional home appliances are on fast rise 2 , other newly developing household consumptions such as personal computers, mobile phones, sports cars and other entertainment equipment are significantly expanded 3 . Oil as a raw material is widely involved in the production of consumer durables sectors, but also used as input and fuel in durable goods. The consumption upgrading has led to the transformation of the industrial structure, 2 In between 1985-2012, the numbers of refrigerators and color television sets owned by every 100 urban residents rise from 17.2 and 6.6 sets to 136.1 and 98.5 units respectively, seven times and 14 times higher respectively; the growth in rural households is faster due to its poor condition. Data sources: WIND information. 3 Take personal computer for example, in 2012 the urban population per hundred units have 87.0 personal computers, every 100 rural households have 21.4 sets, nine times and 42.7 times than that of 2000 respectively. and boosted the demand for oil.
Distinctive from consumer non-durable goods (non-durables) consumption, durable goods (durables) have higher prices and long-term use for each time. In addition, durables consumption behavior is obviously different from other consumer behaviors. On one hand, to those durables that do not belong to the necessities of life, households can selectively consume according to income in different periods, so the intertemporal elasticity of substitution is much bigger than that of non-durables [5]; on the other hand, adjusting durables consumption faces higher costs and has "investment irreversibility", and for individual families, durables consumption is discrete and the purchase decision can be triggered more diversely. Moreover, generally speaking, volatility of durables consumption is much larger than the non-durables. The characteristics of durables itself, exhibit different features than other consumer goods in response to oil price shocks. For example, under the impact of rising oil prices, production costs of durables partially increase, affecting the corresponding demand and investment, but also might postpone people's purchase of durables, thereby reducing consumption [6].
Firstly, with regard to oil price shocks, there are a series of influential works in the field of oil price shocks based on RBC framework [7] [8] [9]. She believes that high oil prices are equivalent to the negative impact of technology, and with a reasonable relations between capital utilization rate and oil usage, oil price rise will reduce firms' capital utilization, which in turn will decrease investment and output, leading to a variety of consequences such as interest rates rise and rising inflation. Moreover, Rotemberg and Woodford (1996) examine the proposition in the context of the imperfectly competitive market, and conclude that imperfect competition is very important for understanding the effects of oil shocks on US economy [10]. To examine the energy impact on the business cycle mechanisms, Kim and Lougani (1992), Dhawan and Jeske (2007) introduce endogenously energy input to the production function in the RBC model by transferring the traditional "capital-labor" type production function into the "capital-labor-energy consumption" type production function [11] [12]. By developing a RBC model of open economy, Backus and Crucini (2000) show that volatility in oil prices is responsible for trade volatility of the most of countries in the world during the last twenty five years in 20th century [13]. Wu (2009) follows Finn (2000) by developing a RBC model in line with China's national conditions, in order to explore impacts on China's energy efficiency fluctuations [14]. The numerical simulation shows that endogenous capital utilization rate change plays a key role in China's energy efficiency fluctuations, which is similar with the conclusion made by Finn's study on the US economy. Moreover, focusing on the Chinese economy under the RBC framework, Sun and Jiang (2012) find the energy price shocks would lead to higher inflation, and have negative impact on economic growth in the short term, but is of short persistence [15].
Secondly, as to durables consumption, mainstream literatures can be divided into durables and non-durables and its impact of macro-economy. Durables re- 't quickly wear out, while non-durables are the opposite of du-rables, such as a short or one-time consumption of goods and services. The motivation for scholars to make such a distinction lies in that in one way, different pace of expenditure and intertemporal elasticity of substitution between two kinds of goods will affect the growth speed of the actual economy; in another way, durables are much sensitive to the economic policy, particularly the monetary policy, than the non-durables, which will lead to changes in policy transmission mechanism and optimal economic policy. Studies are represented by Ogaki and Reinhart (1998), Erceg and Levin (2006), and Monacelli (2009) [16] [17] [18]. For China's economy, Fan et al. (2007) focus on durables consumption of urban and rural residents by using CHNS micro-database, and proves that empirical results support the (S, s) model [19]. Also, Yin and Gan (2009) use CHNS data to study the impact of housing reforms on household durables consumption, and find that housing reform significantly increased durables consumption [20]. Zhao and Hsu (2012) follow the method proposed by Cooley and Prescott (1995) to estimate consumer durables for China, and find that durables consumption is much more volatile than output [21] [22]. Throughout these studies, it can be found that for Chinese economy, discussion of the oil price volatility and durables consumption are separated in the study of economic relations, either simply on the oil price impact on the economy, or just on the role of durables in the economy. There is little literature discussing the complementary nature of them. Moreover, those studies exploring the macroeconomic effects through dividing the consumer goods into durables and non-durables are focused on quantitative analysis, whereas the study of the impact of consumer durables in RBC framework is not yet involved. Meanwhile, if missing the reality features described by the rising durables consumption of urban and rural residents in China, such modelling may bring forth error of fitting and it is difficult to accurately capture the oil price impact on China's macroeconomic mechanisms. In addition, when establishing the RBC theoretical framework of oil economy, the existing literature is silent on using actual economic data to test whether the model really applies to China's cyclical properties.
In view of this, based on RBC framework, our work complements these studies by incorporating non-durables and durables to investigate the transmission mechanism of oil prices on the economy, and moreover shows the patterns of China's business cycle in the context of oil price shocks. Compared with the most existing studies, the contribution of the paper is three-fold: first, we conduct an RBC exercise using quarterly data rather than annual data through the abundant studies in RBC models talking about China's business cycle [23] [24] [25], so it can better fit the characteristics of Chinese pro-cyclical weak consumption; second, we contribute to the existing literature by incorporating consumer durables into RBC theoretical framework, different drastically from the approach based on survey data from empirical analysis to investigate China's economic fluctuations; third, we follow Dhawan and account the correlation between oil and durables through building the three elements in the household sector "non-durables-durables-oil" into two nested CES consumption function. Our result shows that it does a fairly good job in capturing China's real business cycle.
The remainder of this paper is organized as follows. Section 2 describes cyclical properties of oil price and other macro series in China; Section 3 presents the RBC model of oil economy, including durables and non-durables consumption. Section 4 conducts on calibration for parameters. Section 5 discusses model results. Section 6 concludes the paper.
Cyclical Properties of Oil Price and China's Economy
This paper uses data from the databases of CICE and WIND, with the choice of quarterly data and a time span from 1997Q1 (1997Q1 means the first quarter of 1997, the same below) to 2016Q1, a total of 77. China has officially compiled quarterly data since the 1990s, while from 1997 the National Bureau of Statistics of China (NBS) began to announce monthly or quarterly consumer durables data, goods such as the car/furniture/appliances/sports and entertainment products.
Durables data are crucial to the modeling and analysis of our work so the sample is selected from the beginning of 1997. In order to be consistent with the results of the DSGE theory below, seasonal adjustment of variables is made except oil prices and labor, and then all variables are in logarithms and have been de-trended with the HP filter.
According to CPI on a yearly basis and monthly series published by NBS, quarterly fixed base ratio based on 1997Q1 can be figured out, and quarterly GDP deflator is used to calculate the actual value of the relevant economic variables.
Firstly, the actual consumption is the outcome of the total quarterly retail sales of consumer goods divided by quarterly GDP deflator.
Secondly, unlike the US, Chinese consumption official statistics have not been carried out for durables and non-durables. With reference to the mainstream literature classification method and the availability of China data, four representative variables of durables, namely, the car/furniture/appliances/sports and entertainment products, can be divided by quarterly GDP deflator, and then comes out the actual durables investment.
Thirdly, since there is no quarterly or monthly private investment data in the statistics officially published, which is consistent with Wang and Zhu (2015), the domestic loans, self-financing, foreign investment, and other capitals will be taken as representative variables of the total funds for private investment [26]. If the variables are divided by quarterly GDP deflator, actual private investment (i.e. capital investment) is figured out.
Fourthly, the actual total investment is defined as the investment total of actual private investment and durables investment.
Fifthly, using "unit employees in total" as labor is applied in Huang (2005).
Sixthly, the actual GDP is nominal GDP divided by the quarterly GDP deflator. Seventhly, "retail: enterprises over the quota: petroleum and petroleum products" can be obtained as representative variables for household oil consumption.
If this series is further divided by the quarterly GDP deflator, the actual household oil consumption will be obtained.
Eighthly, the use of the West Texas Intermediate (WTI) crude oil spot prices 4 in which the monthly data using the geometric mean method become quarterly data, then converted to RMB price, and divided by quarterly GDP deflator to obtain actual oil prices.
From Figure 1, it can be seen that oil prices is obviously higher volatile than GDP. In particular, the standard deviation of GDP is 0.0376, whereas the stan- Second, one striking fact is that consumption is slightly pro-cyclical with a correlation of only 0.15, as opposed to China's strong pro-cyclicality derived from annual data produced by Rao and Liu (2014), as well as different from strong pro-cyclicality of the US data [27]. Actually, over the past 3 decades, China has achieved a remarkable growth primarily through investment and exports, whereas consumption has been always sluggish, and "high savings and low consumption" is China's important economic characteristics different from the US and other developed economies [28]. Therefore, it is believed that Chinese consumption is slightly pro-cyclical, which can be evidenced by the quarterly data. Other macro series are pro-cyclical, especially for household oil consumption, which is slightly pro-cyclical (0.04), indicating that there is a certain degree of rigidity in households' consumption of oil or its products (such as daily petrochemicals) which does not volatile significantly as incomes changes occur.
Modeling
Based on the canonical RBC model framework developed by Hansen (1985) and Cooley and Prescott (1995), this paper refers to the setting mode of Dhawan and Jeske (2007) model, thus set the production function into the form of "capital-oil-labor", which is a three elements double nested structure. The consumer goods of household sector are also divided into durables and non-durables in the utility function to build a DSGE model of oil economy containing both households and firms.
Households
The representative household consumption ( t C ) consists of durables ( t D ), oil and oil-products ( , h t O , hereinafter referred to oil) and non-durables ( t N ). Assume the double nested CES functional form is constituted by three elements: is the elasticity of substitution between the composite of oil and durables (defined as t F ) and non-durables, and is the elasticity of substitution between oil and durables. There is an accumulative process of consumption for durables and the operating mode is similar to the capital ( t K ) in the model, both belonging to the state variables.
The representative household utility function is as follows: where β denotes the discount factor, t L is the labor supply variable,η is the inverse of the elasticity of labor supply. The budget constraint for households is: [31].
The first order condition is obtained by solving the dynamic optimal choice problem of the representative household: where t λ is the Lagrange multiplier of budget constraint, 7), (8) and (9) are Euler equations of non-durables, durables and household oil consumption, which describe the optimal consumption choices of the household on these three goods.
(10) is the supply equation of labor and (11) is the Euler equation of capital. (12) and (13) characterize the optimal dynamic investment behaviors of capital and durables.
Firms
Same with the settings of household sector, the production function of firms is a double nested CES 5 functional form constituted by three elements: (14) and (15) in this paper) is in line with the actual situation of China. In addition, it is a fact that energy impacts and conducts other macroeconomic variables through the capital good market, so we choose the form of (K/E)/L [32].
O is the oil consumption of firms, t X is the composite of capital and oil ( similar to t F in household sector), t A is neutral technology shock, also known as the so-called total factor productivity (TFP) and its logarithmic form follows the stochastic process below: σ is the standard deviation of technology shocks. Also, We can derive the first order condition with respect to t L , t K and
Equilibrium Conditions and Model Solution
So far, we have characterized the optimal choices of the households and firms under constraints: the maximization of expected utility of households and the maximization of expected profits of firms, so the market clearing of the final good is: In recent years, oil coming from abroad has accounted for an increasing pro- we assume that volatility of oil prices in China depends on the international market, that is to say the oil price is completely exogenous and follows the ARMA (1, 1) process (see the parameter calibration part in the next chapter).
The final log-linearization is: Finally, by solving the log-linearized equations, optimal equilibrium path for each endogenous variable can be obtained:
Oil Prices and Technology Shocks
The purpose of this paper is to examine the relevance of oil prices and China's economy, thus how to determine the correlation coefficient of oil price shocks is particularly important. Through trial and error, it is found that ARMA (1, 1) model can fit the actual fluctuating trend of oil prices in the sample period, as seen in Figure 2. Specific estimation results are shown in Table 2 1) The discount factor β From 1997Q1-2016Q1, the average inflation growth rate on a quarter-to-quarter basis is 1%, so the quarterly discount factor is set at 0.99.
2) Capital depreciation rate δ k and durables depreciation rate δ d In the study of China's economic fluctuations in the literature, the average life span of China's fixed asset is mostly set at 10 years, the capital depreciation rate is 0.1, and the corresponding quarterly value is 0.025 [35]. Previous studies have not yet estimated the depreciation rate of durables, but Chinese scholars also include assets of durables in estimating fixed assets, and therefore might assume it is the same as the rate of capital depreciation.
3) Substitution parameters ρ c , ρ y , ρ F , ρ x Using data from the US and Japan respectively, Pakos (2011) show that elasticity of substitution between durables and non-durables is close to 1, i.e., 0 c ρ = [36]. Following Kim and Loungani (1992), elasticity of substitution between the production function of the composite of energy and capital and labor is set at 1, namely, 0 y ρ = . Using the US industrial data, Lee and Ni (2002) find that higher oil prices will not only reduce the supply of energy-intensive output, but also reduce the demand for durables such as cars, etc., which means oil products and durables are complementary in the actual economy, therefore, no-US at this stage may be of nearly three decades, so the value set also has certain rationality.
In conclusion, all deep parameters of RBC model are summarized in Table 3:
Model Results
Toolkit package containing Matlab source code by Uhlig (1999) is used to obtain cyclical characteristic information for each macroeconomic variable in "Second Table 3. Calibration of deep parameters. Tables 4-8. For comparison purposes, it is treated as follows: 1) RBC model with durable goods consumption is taken as a benchmark model in this paper, and there exist two shocks (oil price and technology) which are denoted as DRBC.
2) Model structure is the same with 1), but with only the oil price shocks, denoted as DRBC-OIL; the purpose is not to investigate changes in the technology, but only to discuss the impact on the business cycle of oil price shocks.
3) Model structure is the same with 1), but with only technology shocks, denoted as DRBC-TFP; the purpose is not to investigate changes in oil price, but will only discuss the impact on the business cycle of technology shocks.
4) It contains RBC model with a single consumption structure, namely, a simple RBC type model, which means consumption is not distinguished between durables and non-durables, and also with two shocks, denoted as SRBC. 5) Model structure is the same with 4), but with only the oil price shocks, denoted as SRBC-OIL.
Comparison with the Actual Economy
The predicted results of DRBC, DRBC-OIL and DRBC-TFP are shown in Table 4 & Table 5, and when compared with the actual economy, several findings on economic variables can be achieved as follows: From the point of volatility, the standard deviations of oil price and household oil consumption are 14.98% and 10.09% respectively, far greater than the volatility of output 3.75%, 4.00 times and 2.69 times of the output respectively; capital investment, total investment, durables investment are also higher than that of output, namely, 5.60%, 5.39% and 5.05%; output is ranked No. 6, and labor and consumption are low, with only 1.69% and 1.61%, of which consumption volatility the lowest reflects exactly the classical theory of household intertemporal smooth consumption behavior advocated by RBC 6 . Priorities of this volatility 6 The RBC consumption smooth theory actually supports the life cycle hypothesis by Modigliani & Brumberg (1954) and the permanent income hypothesis by Friedman (1957), which means that the individual resources available over the entire lifetimes are important determinants of consumption, and when encountered by wealth shocks in life, rational consumers will adjust their spending to prevent the whole greater volatility of consumption. completely coincide with the actual economy. From the point of view of K-P ratio, the output volatility of DRBC is close to data from China, with its output K-P ratio reaching 0.9973, indicating that the model accounts for 99.73% of the volatility of output in the data. Table 6 shows that output K-P ratio of DRBC-TFP reaches 113.56%, while the corresponding DRBC-OIL only 26.33%, indicating that the main source of output volatility is technology shocks rather than oil price shocks. This conclusion is relatively consistent with the study under the framework of classical RBC by scholars that suggests that the main source of volatility of China's output since reform and opening up in 1978 is technology shocks. Consumption volatility under DRBC is slightly lower than data from China, and the K-P ratio is 0.9253, which means that the artificial economy can account for 92.53% of the volatility of China's consumption; it is worth noting that the K-P ratio of consumption in DRBC-OIL is up to 90.80%, while the K-P ratio of consumption in the DRBC-TFP is only 25.86%, indicating that the main source of consumption volatility is oil price shocks rather than technology shocks, as opposed to the output. The K-P ratio of three investment variables of DRBC are close to 1, marking that the artificial economy captures three types of investments. Also shown in Table 5, similar with output, capital investment and total investment are mainly driven by technology shocks, and oil price shocks can account for 79.92% of the volatility of durables investment. It is slightly higher than 75.38% of technology shocks, because durables investment is actually the future consumption of durables to households, and the impact of oil price shocks is bigger than technology shocks on consumption. K-P ratio of household oil consumption in DRBC is 1.2676, signaling that the model accounts for 126.76% of the household oil consumption, which somewhat exaggerates the volatility of that, and it is further discovered that DRBC-TFP is only 6.28%, and DRBC-OIL is 126.26%, demonstrating that the oil price shocks can almost capture all the volatility of household oil consumption, which is also consistent with the intuitive logic. The labor K-P ratio of DRBC is 0.6898, and moreover, DRBC-OIL and DRBC-TFP are 25.71% and 73.88% respectively, indicating that the volatility of labor is mainly driven by technology shocks. Finally, the K-P ratio of oil price in DRBC is 0.7226, proving that the benchmark model can account for the oil price volatility of 72.26%.
From the point of view of the correlations with output, the model DRBC shows that all series are pro-cyclical, except for oil prices, which is weakly countercyclical. The DRBC predicts this dimension closely. In particular, the correlations of labor, capital investment, the total investment and output are up to 0.99, 0.98, and 0.89 respectively, higher than in the actual economy, showing a strong pro-cyclicality.
The correlation between durables investment and output is 0.63, slightly higher than 0.52 in the actual economy. As mentioned earlier, compared with the developed economies, China's consumption is weakly pro-cyclical, and DRBC better captures the feature, with the correlation between consumption and output in the artificial economy being 0.18, not far from 0.15 of the actual economy 7 . 7 We have searched for the main core economic journals for nearly a decade, and found in the literature of China's RBC theory study over 80% analysis shows a correlation between consumption and output above 0.8. China's consumption shows a strong pro-cyclicality, similar with the US economy. However, we believe this result is debatable, because the US is a "low savings and high consumption" economy, and their consumption is the largest engine in driving economic growth, once reached 70% of GDP, whereas China is a "low consumption and high savings" economy, and the consumption is below 40% of GDP for long term. Over the past three decades, the rapid growth is mainly drove by investment and exports of the "two carriages", therefore, so from the intuitive logic, China's consumption should be weak pro-cyclicality, different from a strong pro-cyclicality of the US economy. The main reason is probably largely based on the authors' estimation of annual data and analysis, in fact, the foundation work on RBC "Time to Build and Aggregate Fluctuations" written by Kydland and Prescott (1982), and besides, Hansen's work (1985) "Indivisible Labor and the Business Cycle" which has an important influence on RBC theory, are based on quarterly data for parameter calibration and "Second Moment" estimation (the former sample period is 1950Q1-1979Q2, the latter sample period is 1955Q3-1984Q1), which to some extent reflects the necessity and reasonableness of the paper in using quarterly data. are also more matching with the actual economy.
In summary, the DRBC model can accurately simulate the "Second Moment" feature about the actual economy, and can be used as an appropriate model to capture the volatility of China's economy.
Compared with SRBC Model That Does Not Consider Consumer Durables
Simulated results that do not consider the consumer durables of SRBC, SRBC-OIL, and SRBC-TFP are shown in Table 6 & Table 7. The most salient feature when comparing DRBC with SRBC is DRBC has improved consumption prediction and made the results closer to the actual economy and better. Starting with consumption comparison, volatilities in consumption prediction by DRBC (1.61%) are greater than that of SRBC (0.68%), closer to the actual economy (1.74%). Furthermore, with regard to the K-P ratio, the explanatory power of DRBC of 92.53% is much higher than SRBC of 39.08%; Eventually, SRBC shows a strong pro-cyclicality for consumption (a correlation between consumption and output is 0.84), which cannot fit a weak pro-cyclical consumption characteristic of China's actual economy, whereas DRBC can better fit the characteristics.
From the output comparison, SRBC predicts the standard deviation of output Table 8 shows the variance decomposition of technology and oil price shocks of DRBC in accounting for China's macroeconomic variables volatilities. In order to focus on the problem studied in this paper, only two exogenous shocks 8 are introduced in the model: one is technology shocks, the core of RBC theory, the 8 In another study based on the New Keynesian DSGE model, we introduce more exogenous shocks (ten exogenous shocks such as government spending, demand preference, labor supply, investment, etc.) in doing variance decomposition analysis, and the conclusions also show that the oil price shocks on the macroeconomic variables is similar with the conclusions drawn from the body of this paper. Figure 3 plots the responses of the main macroeconomic variables to one standard deviation of positive oil price and technology shocks. By the contemporaneous effect and the household's intertemporal budget constraint, rising oil prices lead to a negative income effect and households reduce durables, nondurables, and household oil consumption, thus consumption fall and the labor supply increases. Notice that the rise in oil prices has triggered a running track of the capital investment "first rise and then the fall", rather than an immediate decline of traditional RBC model. Specifically, the impact has led to increased investment in the first two periods, and the decline from the third period. Economic logic behind this is the following: investment and accumulation process of durables are entirely decided by households, and capital goods are jointly decided by households and firms (households determine capital supply, and firms 9 In fact, China's output is insensitive to oil price fluctuations, which is very similar with that of the US economy since 2000. According to Li (2008), in the rise of oil prices in the new century, the US economy shows "tolerance" to the continuous rising energy prices and "sustainability" of economic growth with high energy prices [42]. The reason may be that there is a strong complementary relationship between oil and durables consumption, and volatility in oil prices may be weakened by durables' stability, resulting in the weakening of oil transmission capacity in product markets, which has led to the weakening of the impact on output volatility. determine capital demand), therefore, it can be drawn that households need to rebalance its investment portfolio for durables and capital goods. According to the calibration study, in the initial steady state, the proportion between household oil and durables (O h /D = 0.013) is much larger than the ratio of oil to capital in production (O f /K = 0.003). The decline in marginal revenue of durables caused by oil price shocks is higher than the marginal revenue decline of capital goods. In order to balance the marginal income differences, households will immediately rebalance the portfolio, increase capital goods while reducing durable goods, and this capital increase will sufficiently offset the decrease in firms' demand of capital investment brought by high oil prices; meanwhile the ARMA (1, 1) of the oil price shocks determines that the propagation of oil price shocks is characterized by two periods in the time dimension, which together leads to a two-periods increase in capital investments (i.e., greater than zero). Then starting from the third period, capital investment is switched into a negative trend, mainly because the high capital stocks t K and low durables stocks t D in the initial period have led to the fact that the portfolio rebalanced behavior of households cannot fundamentally reverse the huge gaps between those two, therefore, produce subsequent negative trends for two types of investment. The rise of capital and labor has increased the production, but brought forth the decline in the value added t VA because the decline in durables investment and non-durables is greater than the short-term increase in the magnitude of capital investment.
Impulse Response Analysis
Y. Q. Wang et al.
Unlike oil price shocks, technology shocks have a direct impact on production function, but do not enter the utility function and do not directly influence durables investment. Therefore technology shocks do not affect the two portfolio reallocation by households, just leading to the rise in capital investment. Since our purpose is to study the impact of oil prices, and the impact mechanism of technology on the economy has been extensively studied in a large number of RBC literatures, it will not be discussed here due to limited space.
Conclusions
Existing literatures on China's RBC focus on the impact of macroeconomic cycle brought by technology, finance, monetary, international credit, and sunspot shocks, but lack discussion on energy price shocks represented by oil, and ignore the fact that the international oil price volatility in recent years is one source of Second, the oil price shocks mainly affect consumption volatility, but seldom influence output, investment and labor, the three variables of which are largely influenced by technology shocks. Specifically, the K-P ratio of consumption in DRBC-OIL is up to 90.80%, while K-P ratio of that in DRBC-TFP is only 25.86%; the K-P ratios of output, investment and labor in DRBC-TFP are 113.56%, 109.61%, and 73.88% respectively, while in DRBC-OIL the corresponding ratios are 26.33%, 11.64%, and 25.71% respectively.
Third, the benchmark model (DRBC) reveals that the transmission mechanism of oil prices is determined by intra-temporal income effects and inter-temporal effects of portfolio rebalanced between durable goods and capital goods.
It is implicated that the impact of oil price shocks on China's output volatility may not be so big, but the main impact is on consumption. Expanding domestic demand and boosting consumption become the main tone in the future of China's economic transition and growth; therefore great importance should be at-Y. Q. Wang et al. | 8,036 | 2019-04-08T00:00:00.000 | [
"Economics"
] |
Polarization Correlation of Entangled Photons Derived Without Using Non-local Interactions
Entangled photons leaving parametric down-conversion sources exhibit a pronounced polarization correlation. The data violate Bell's inequality thus proving that local realistic theories cannot explain the correlation results. Therefore, many physicists are convinced that the correlation can only be brought about by non-local interactions. Some of them even assume that instantaneous influences at a distance are at work. Actually, assuming a strict phase correlation of the photons at the source the observed polarization correlation can be deduced from wave optical considerations. The correlation has its origin in the phase coupling of circularly polarized wave packets leaving the fluorescence photon source simultaneously. The enlargement of the distances between photon source and observers does not alter the correlation if the polarization status of the wave packets accompanying the photons is not changed on their way from the source to the observers. At least with respect to the polarization correlation of entangled photons the principle of locality remains valid.
INTRODUCTION
In 1935 Einstein et al. [1] initiated a discussion whether quantum mechanics is complete or not. In the following years one could not find concrete hints for the occurrence of hidden variables. In 1964 Bell [2] showed on the basis of two spin 1/2 particles that local realistic theories can principally not reproduce the results of quantum mechanics. In 1969 Clauser et al. [3] proposed an experiment to test local hidden variable theories with entangled photons. Already 3 years later Freedman and Clauser presented first measurements proving that local realistic theories were not able to describe the experimental results [4].
All experiments providing polarization correlation data with good statistics are performed in such a way that the detection processes of two distant observers are spacelikely separated. Thus, the publications on these experiments generally suggest that the results can only be induced by superluminal signals between the observers. Especially Salart et al. [9] emphasize that the violation of Bell's inequality seems to prove that quantum mechanics make use of non-local interactions.
Discrepancies between the results of local realistic theories and quantum mechanics are also discussed for more complicated quantum systems with more than two particles [13]. Many of these publications insinuate that faster-than-light communication might be possible. The drawback of all these attempts to prove the occurrence of non-local interactions is that until now no concrete results could be presented which reproduce the experimental findings.
In the last few years several recognized physicists try to prove that quantum mechanics does not use non-local interactions [14][15][16][17][18][19][20][21]. The authors show that some mathematical operations like the reduction of a quantum state seem to have non-local consequences. On closer examination these operations only cause changes of the observer's knowledge on the quantum state. The changes thus do not take place in physical space but merely in information space.
In fact, the results of the experiments with parametric down-conversion photon sources can be derived from wave optical and quantum statistical considerations without using superluminal signals. There are good arguments to assume that the experiments of Aspect and coworkers with entangled photons emerging from a specific decay cascade of calcium [5,6] can also be explained without using non-local interactions. However, additional tests on the polarization status of the photons would be helpful in order to conclusively answer the question.
PHOTON PAIRS ARISING FROM DOWN-CONVERSION SOURCES
In the last 22 years several polarization correlation experiments with parametric down-conversion sources have been performed [7][8][9][10][11][12]. If necessary experimental details are taken from the doctor thesis of Weihs [22]. In a BBO crystal ultraviolet photons are converted into two phase coupled circularly polarized green photons with equal energies.
The circularly polarized wave packets are immediately decomposed into two linearly polarized wave packets with orthogonal polarization directions. The ordinary beam is vertically polarized. The extraordinary beam is horizontally polarized. Due to the different propagation directions the emission cones of ordinary and extraordinary beam appear on the exit plane as two off-centered circles which intersect each other at two points (see Figure 1). After traversing a compensation plate the reassembled circularly polarized wave packets leave nearly unchanged the two intersection zones.
In the polarization correlation experiments with parametric down-conversion sources only the so-called singletconfiguration has been studied. In this configuration the polarization planes of associated photons rotate in the same direction. In statistical average about one half of the photon pairs rotate clockwisely, the other half counterclockwisely.
DETECTION OF POLARIZED PHOTONS BY ALICE AND BOB
Photons emerging from the two exit sites of the source are guided by optical fibers to the observers. After leaving the optical fibers the wave packets traverse an electro-optical modulator arranged between two suitably oriented quarter-wave plates. In combination the three optically active elements twist linearly polarized waves by an arbitrarily choosable angle proportional to the applied voltage. The detector unit is fixed in space. The twist of the plane waves by the electro-optical modulator simulates a virtual twist of the detector unit. For the sake of convenience it will be assumed in the following that the twisting units are omitted and that the detectors are really twisted in space.
By the use of Wollaston prisms Alice and Bob split the incoming wave packets into two equally large components with orthogonal polarization directions. The linearly polarized components hit altogether four detectors which should be highly sensitive in order to detect nearly all incoming photons [11,12]. When the apparatus is thoroughly adjusted the count rates of the detectors should no longer depend on the polarization direction.
In the four detector channels each registered pulse is saved together with an individual time stamp. After having finished the measurement the four data lists are compared in order to determine four coincidence rates namely I(α, β), I(α, β + 90 • ), I(α + 90 • , β), and I(α + 90 • , β + 90 • ). Let I 0 be the coincidence rate when the selecting filters are removed on both sides of the experiment. If the losses in the filters are negligible I 0 is also the coincidence rate summed up in the four channels. The two coincidence rates I(α, β) and I(α, β + 90 • ) add up to I 0 /2. The same is true for the coincidence rates I(α + 90 • , β) and I(α + 90 • , β + 90 • ). Thereby one has to bear in mind that coincidence rates exhibit statistical uncertainties.
In this article particle as well as wave aspects will be addressed because the correlation of photons detected by Alice and Bob depends on the relative phase of the circularly polarized wave packets accompanying the photons. The derivation of the polarization correlation is mainly based on wave arguments but if necessary particle aspects will also be considered.
The terms "wave" and "light" are often used for convenience. In fact, a light beam will always be understood as a stream of independent wave packets with limited coherence length. Only wave packet pairs incorporating entangled photon pairs are strictly phase coupled when they leave the photon source. In the experiment of Weihs [22, p. 63] the coherence length has been estimated to be about 0.1 m. Thus, the wave packets leaving the photon source are very short in comparison with the distance between Alice and Bob thus precluding wave based non-local interactions between the observers.
FORMAL DERIVATION OF THE POLARIZATION CORRELATION
In wave optics and quantum mechanics one often asks for the phase relation of interfering waves in the detection plane in order to get the interference pattern. In correlation experiments, however, one has to ask for the phase relation of two associated wave packets at the source. The relative phase at the source manifests itself in the overlap integral of the two normalized wave packets.
The two wave packets simultaneously leaving outputs A and B have a phase shift of ± 90 • at the source. The sign reveals which of the wave packets is leading. In Figure 1 the phase shift is indicated by twisted rotation vectors. If α = β an additional phase shift of ± (α − β) has to be taken into account. The sign depends on the rotational direction of the two circularly polarized wave packets. Thus, the total phase shift of the two linearly polarized partial waves looked for by the two observers is Neglecting the envelope function one has to evaluate the overlap integral of the two normalized functions The second function divided by the normalizing factor can be converted by using trigonometrical addition theorems twice By using the definite integrals one can easily calculate the overlap integral The (absolute) square of the overlap integral of the two normalized phase coupled wave packets is proportional to the coincidence rate. As has been explained in the previous chapter the coincidence rates I(α, β) and I(α, β + 90 • ) add up to I 0 /2. Therefore, the proportionality factor must be I 0 /2. Thus the coincidence rate is given by and the correlation is given by With this rather simple consideration the experimentally found correlations of entangled photons have been fully reproduced.
WORKING OUT QUANTUM STATISTICAL ASPECTS
Quantum statistics will become much clearer if each of the two circularly polarized light beams A and B leaving the source is formally splitted into two commensurate linearly polarized beams with orthogonal polarization directions. A circularly polarized wave can always be understood as the superposition of two equally sized linearly polarized partial waves with orthogonal polarization directions. The two partial waves are phase shifted with respect to each other by ± 90 • . The orientations of the linear polarizations ϑ and ϑ + 90 • can be freely chosen. The photons contained in the two partial beams form two disjunct groups. If a photon has been assigned to a linearly polarized partial beam it will always stay in that beam. There is no intermixing between the two photon groups on their way from the source to the observers even if the photons and the accompanying wave packets traverse electro-optical modulators and quarter-wave plates.
All modern experiments are planned with the aim that selection and detection processes carried out by the two observers are spacelikely separated. Therefore, the splitting is performed just in front of the detectors. The rather late fixing of the angles α and β even concerns photons leaving the source much earlier. Thus, the splitting of the circularly polarized beams admittedly needs non-local information but certainly no nonlocal interaction because the two streams of photons propagating toward Alice and Bob are not modified by the repeated change of the detection angles. Before the photons reach the associated Wollaston prism the splitting procedure is a purely mathematical but not a physical process.
Due to their common origin entangled photon pairs are phase coupled when they leave the source. In case of parametric downconversion processes the two entangled photons are in phase but the two associated circularly polarized wave packets are phase shifted by ± 90 • .
As the optical pathes from the source to Alice and Bob will generally not be balanced the initial phase information cannot be recovered by simply comparing the arrival times of the entangled photons. This would merely be impossible due to the limited time resolution of external clocks and to the jitter of the detection electronics.
Fortunately the two beams are equipped with synchronized internal clocks which can be easily read off by the observers. Within one wave cycle the polarization plane performs a full turn. Thus, the relative phase of the photons at the source up to multiples of 180 • can be recovered from the difference of the polarization angles looked for by the two observers. The modulo 180 • term comes from the 180 • periodicity of the polarizer's transmittance.
The polarization correlation with due regard to the particle aspect will be derived in two steps. At first the case α = β will be discussed. This step covers the crucial point in the line of arguments explaining why the entangled photons are statistically distributed to only two of the four possible coincidence channels.
The two partial beams A(α) and B(α + 90 • ) are in phase (or opposite in phase) at the source. The same is true for the partial beams A(α + 90 • ) and B(α). As the photons are in phase at the source they must be found either in the coincidence channel A(α)/B(α + 90 • ) or in the coincidence channel A(α + 90 • )/B(α). As the two coincidence channels are equivalent the probabilities to find the entangled photon pairs in these two coincidence channels must be equal.
In contrast, the partial beams A(α) and B(α) are phase shifted at the source by ± 90 • . That means they are orthogonal to each other. The same is true for the partial beams A(α + 90 • ) and B(α + 90 • ). Therefore, there will be no coincidences in these two coincidence channels.
The considerations above prove that the two entangled photons are both contained either in the partial wave pair A(α) and B(α + 90 • ) or in the partial wave pair A(α + 90 • ) and B(α). Whether the photon is detected by detector A(α) or by detector A(α + 90 • ) is purely accidental. One cannot predict which detector will be hit by individual photons. However, after the detection of the first photon of a photon pair for example on Alice's side it will be clear which one of the two detectors on Bob's side will be hit by the second photon.
Only the anti-correlation of entangled photons is predefined but not the polarization of individual photons [23]. This is why the polarization direction should not be thought of as an element of reality.
The phase relation of partial beams at the source thus leads to the strong polarization correlation although the information on the polarization status is not a hidden property of the photons. Einstein et al. [1] had claimed that a property equally found in two no longer interacting quantum states must be an element of reality. The pronounced polarization correlation of entangled photons seems to be a counterexample.
The wrong estimate of Einstein and his coworkers has entailed the erroneous approach of Bell [2] who assumed that the polarization directions are real properties of the photons. In fact, the phase coupling only predefines the interrelationship but not the property itself. In consequence Bell's inequalities are irrelevant.
The extension of the consideration to the case α = β is rather trivial and exclusively rests on an optical law discovered by Etienne Louis Malus in 1810. Malus' law says: If light linearly polarized in direction γ traverses a polarization filter with its polarization axis oriented in direction δ its intensity is reduced by the factor cos 2 (γ − δ).
One cannot predict which one of the photons will traverse the polarization filter because Malus' law has a purely statistical character. The law is valid not only for light leaving a classical light source but also for laser light. That means it does not depend on second-order coherence properties of a photon stream. It is also experimentally proven in case of low intensity when the beam intensity is measured by single photon detectors. Brukner and Zeilinger explicitly show that Malus' law is also valid in the quantum regime [24]. In one of his recent publications Khrennikov has also used Malus' law when he derived the polarization correlation of entangled photons starting from quantum mechanical considerations [16, p. 3].
The first of Equation (8) means that if one of the entangled photons has been recorded by detector A(α) the associated photon will certainly be contained in the partial beam B(α +90 • ). Therefore, one has to apply Malus' law for γ = α + 90 • and δ = β. That means the coincidence rate I 0 /2 is reduced by the factor cos 2 (α+90 • −β) = sin 2 (α−β). Therewith the coincidence rate I(α, β) is given by in accordance with Equation (6). The role of Alice and Bob can be exchanged. If the circularly polarized beams are splitted into partial beams linearly polarized Frontiers in Physics | www.frontiersin.org in the directions β and β + 90 • the results presented above will be reproduced.
For α = β Malus' law with its inherently statistical character has to be applied on Alice's or on Bob's side. In this case the correlation C(α, β) is larger than zero and smaller than unity. Thus, the correlation is not defined for a single pair of entangled photons but only for a sufficiently large group of entangled photon pairs.
As has been proven above the piece of information responsible for the emergence of the pronounced correlation is the phase shift of two associated wave packets when they leave the source. Traditionally quantum mechanics strictly takes into account phase differences of wave functions contained in a matrix element. Therefore, it can be assumed for sure that the phase difference of the two entangled photons will also be considered in quantum mechanics.
It is not relevant whether the correlation problem is handled classically or quantum mechanically. It is only relevant whether the phase information is used or not.
The calculations based on local realistic theories do not consider phase relations. They only try to reproduce the polarization correlation by assuming that the polarization directions of the entangled photons are encoded in the photons as hidden variables. In explaining the strong polarization correlation of entangled photons only their relative phase at the source is relevant.
GENERAL REMARKS
The pronounced correlation of entangled photons is neither superprising nor mysterious. It solely depends on the initial phase shift of the circularly polarized waves accompanying the entangled photons. One only has to make sure that the polarization directions α and β looked for by the two observers are associated with corresponding polarization angles at the source. This condition is fulfilled in each of the experiments. Hereby it is not relevant at what time the polarization directions have been chosen. The purely conceptual splitting of the two partial beams and the detection of the photons have no effect on the parametric down-conversion process. The relative phase of the entangled photons has been fixed inside the source. The observers only decide which polarization directions they look for. There is no need for a superluminal information transfer between the observers. The distance between the observers is absolutely irrelevant.
The relative phase of entangled photons at the source could be declared to be a hidden variable finally revealed by the coincidence detection process. Hidden variables of this type can only be associated with wave packets but not with particles. The decisive point of the argumentation is that the wave intensity and thus also the coincidence rate is proportional to the (absolute) square of the scattering amplitude. Properties are only manifested after squaring the overlap integral. In particle based considerations properties directly act upon counting rates.
Bell's inequality is misleading because it attributes properties like polarization directions to particles and not to waves. Therefore, Bell cannot take into account phase differences of entangled photons. In future one should ignore violations of Bell's theorem because Bell's considerations are not adequate to describe wave phenomena.
CORRELATION OF PHOTON PAIRS IN TRIPLET CONFIGURATION
A pronounced correlation of entangled photons should also be observable in triplet configuration. That means that the two circularly polarized waves are rotating in opposite directions. In this case the correlation cannot be derived as easily as in the singlet case. One can figure out that the triplet configuration arises from the singlet configuration by mirroring one of the circularly polarized waves at a vertical plane. This can be performed by a half-wave plate with the optical axis oriented in vertical direction. If the circularly polarized wave packets are phase shifted by ± 90 • the correlation should be Thereby the origins of the angles α and β have to lie in the vertical plane. Preliminary measurements of Weihs [22, p. 72] support this result. For example if the two observers both look for polarization directions parallel to 45 • the coincidence rate is at a maximum.
In a former publication [25] the sign in the correlation equation for the triplet configuration was minus instead of plus. The sign change has to do with the fact that Bob's coordinate system was left-handed in the previous article. In the consideration above both coordinate systems are right-handed.
PROPERTIES OF PHOTON PAIRS ARISING FROM ATOMIC SOURCES
In the experiments with parametric down-conversion sources the two circularly polarized wave packets are phase shifted by ± 90 • leading to a strict anticorrelation of the linear polarizations. In contrast, in the experiments of Aspect et al. [5,6] the two circularly polarized wave packets are in phase or opposite in phase. Therefore, the correlation is given by
DOES IT HELP TO POSTULATE NON-LOCAL INTERACTIONS?
Is it really helpful to postulate a novel interaction which is in serious conflict with special relativity? Postulating an information transfer faster than light entails a wealth of new problems. An instantaneous influence at a distance requires that simultaneity can be strictly defined for distant locations in contrast to corresponding assertions of special relativity. Even if such principal objections are ignored many practical problems arise. How could such a postulated interaction generate correct results? In correlation experiments the ratio of coincidence rates in two complementary channels I(α, β) and I(α, β + 90 • ) has to be precisely defined. The newly postulated interaction has to redirect a wellspecified percentage of stochastically arriving photons from one channel to the other one. The expected ratio of coincidences in the two channels depends on the difference of the polarization directions α and β? How does the postulated interaction get the information on the angles? In the experiments the twisting angles α and β are generated by applying voltages to electro-optical modulators. How could any theory whatsoever associate a voltage to an angle? The proportionality factor depends on the material, on the orientation of the crystal axis and numerous other experimental details.
Actually, in the optical fibers spurious birefringent effects occur which are manually compensated. How can the postulated new interaction know whether the apparatus is well-adjusted or not? By the way all the twisting processes are frequency dependent. Only light composed of photons like those used in the experiment can gain the information on the adjustment status and on the angles α and β.
The experiment of Salart et al. [9, p. 863] shows that the postulated "spooky" interaction must be at least 50,000 times faster than the speed of light. If the lengthes of the optical fibers differ distinctly from each other the superluminal signal has to wait quite a long, but an extremely welldefined time interval before it redirects individual pulses from one output to the other one. It will be extremely difficult to embed such a delayed reaction in a serious physical theory.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication. | 5,426.4 | 2020-05-19T00:00:00.000 | [
"Physics"
] |
Machine learning to identify chronic cough from administrative claims data
Accurate identification of patient populations is an essential component of clinical research, especially for medical conditions such as chronic cough that are inconsistently defined and diagnosed. We aimed to develop and compare machine learning models to identify chronic cough from medical and pharmacy claims data. In this retrospective observational study, we compared 3 machine learning algorithms based on XG Boost, logistic regression, and neural network approaches using a large claims and electronic health record database. Of the 327,423 patients who met the study criteria, 4,818 had chronic cough based on linked claims–electronic health record data. The XG Boost model showed the best performance, achieving a Receiver-Operator Characteristic Area Under the Curve (ROC-AUC) of 0.916. We selected a cutoff that favors a high positive predictive value (PPV) to minimize false positives, resulting in a sensitivity, specificity, PPV, and negative predictive value of 18.0%, 99.6%, 38.7%, and 98.8%, respectively on the held-out testing set (n = 82,262). Logistic regression and neural network models achieved slightly lower ROC-AUCs of 0.907 and 0.838, respectively. The XG Boost and logistic regression models maintained their robust performance in subgroups of individuals with higher rates of chronic cough. Machine learning algorithms are one way of identifying conditions that are not coded in medical records, and can help identify individuals with chronic cough from claims data with a high degree of classification value.
disease or cessation of prescription medicines (e.g., angiotensin-converting enzyme [ACE] inhibitors) that can cause cough 14,15 .However, a cause and/or successful treatment cannot be identified for up to half of individuals with chronic cough 12,16,17 .Chronic cough is associated with high rates of health care resource use, and individuals with unexplained or treatment-resistant chronic cough often see multiple specialists and undergo extensive diagnostic testing [18][19][20][21][22][23][24] .Patients generally report poor success rates with prescription drugs and other medical approaches; over-the-counter or home remedies for cough are commonly used instead of or in addition to prescription options 21,22,25 .Many individuals report giving up on seeking further medical attention for their chronic cough due to lack of previous success 21,22 .
There is no FDA-approved treatment that is specific to chronic cough, which has generally been regarded as a symptom rather than a distinct clinical entity.In addition, at the time of this study there was also no diagnosis code for chronic cough.Many patients receive other diagnoses before their chronic cough is properly addressed, which impedes their treatment as well as research efforts to characterize and address the unmet diagnostic and therapeutic needs of this population 26,27 .
We and others have recently reported the development and validation of natural language processing (NLP) algorithms that can identify cough mentions from provider notes in patients' electronic health records (EHRs) 18,27,28 .Our NLP-based algorithm had a positive predictive value (PPV) of 0.96 for identification of cough mentions, as compared to a manually annotated gold standard data set 18 .This algorithm defines chronic cough as the presence of at least 3 cough encounters within a 120-day period, with at least 56 days between the first and last encounters.The 120-days period as a maximum "gap" increases the likelihood that the 3 cough encounters shared a common etiology.This rule-based algorithm heavily relies on the presence of clinical notes.Without clinical notes, the algorithm identified just 15.9% of chronic cough cases 18 .Additionally, clinical notes are only available in smaller EHR databases due to costs of de-identifying such data and privacy concerns.In contrast, many administrative claims database cover large proportions of the population.The objective of the current study was to augment this previous work by developing machine learning algorithms to identify and characterize individuals with chronic cough from medical and pharmacy claims data.
Participants
The final sample was identical to that reported previously and comprised 327,423 individuals 18-85 years of age with at least 24 months of claims data and no evidence of ACE inhibitor use (Fig. 1) 18 .The gold standard EHRbased algorithm identified a total of 128,467 individuals with ≥ 1 cough encounter and 4,818 (1.5%) who met the criteria for chronic cough 18 .Among the gold standard positive class for chronic cough, 66.7% were female and the mean (SD) age was 61.0 (15.3) 18 .
Performance of 3 claims-based chronic cough identification models
Performance metrics for 3 claims-based identification models for chronic cough (XG Boost, logistic regression, and neural network approaches) are summarized in Table 1.All 3 models had high specificity (0.974-0.996) and negative predictive value (NPV; 0.985-0.988).PPV (also termed precision) was 0.134 for the XG Boost-based model, 0.344 for the logistic regression model, and 0.218 for the neural network-based model.Sensitivity (recall) was low (0.153-0.207) across all 3 models.
Receiver-operating characteristic (ROC) and precision-recall (PR) curves were plotted for each model (Fig. 2).The former plots represent the tradeoff between sensitivity and specificity, while the latter visualize the tradeoff between sensitivity rate and PPV.The XG Boost-based model produced the highest area under the curve (AUC) values for both ROC and PR, while the neural network-based model had the lowest values for both metrics.
Feature importance for the first 50 variables for the XGBoost model and the logistic model have been reported in supplementary Tables.The most important identification features for the XG Boost-based model were having a chest X-ray (Current Procedural Terminology [CPT] code 71,020), count of cough diagnoses (ICD-10 R05), a prescription for albuterol, a visit with a pulmonary specialist, and an office visit (CPT 99,215).For the logistic regression model, the most important identification features were ≥ 1 cough symptom (ICD-10 R05; coefficient = 1.012), ≥ 3 cough diagnoses (ICD-10 R05; coefficient = 0.820), a pulmonary specialist encounter (coefficient = 0.521), ≥ 3 home visits for nursing care (Healthcare Common Procedure Coding System code S9123; coefficient = 0.465), and residing in the Midwest region of the US (Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, or Wisconsin; coefficient = -0.461).
Chronic cough is consistently reported to be more common in women than in men, and to predominantly affect individuals > 50-60 years of age 18,19,[29][30][31] .We therefore also assessed the performance of the 3 models with 3 subpopulations found to have a higher prevalence of chronic cough; females (chronic cough prevalence of 1.65% in our sample), individuals ≥ 65 years of age (prevalence 2.77%), and individuals with a diagnosis of cough (prevalence 5.39%).The most favorable performance metrics were produced by the logistic regression model for individuals diagnosed with cough, and by the XG Boost model for females and those ≥ 65 years of age (Table 2).
Discussion
In this study we developed and compared 3 claims-based identification models for prevalent chronic cough, based on XG Boost, logistic regression, and neural network machine learning approaches.The rationale was to explore 3 different models with varying range of complexity/flexibility to help understand variation in their performance.Logistic regression is a simple model with good interpretability.On the other hand, neural network models offer the most flexibility with the least interpretability.XGBoost is a tree-based model like Random Forests and offers a compromise between interpretability and flexibility.These 3 models were selected because they are commonly used models which most data scientists are familiar with.All models had high specificity and NPV indicating that individuals without chronic cough were identified well.Comparatively lower PPV (precision) and sensitivity (recall) were observed due to a very small proportion of the positive class in the sample population (1.5% observed to have chronic cough).Considering this imbalance, logistic regression model was able to achieve a 34% PPV from the full study sample of medical plan enrollees.The most common metric used to measure classification performance is ROC-AUC, using which XG Boost had the best performance, closely followed by logistic regression model.
We also applied each model to 3 subsamples with a higher prevalence of chronic cough compared to the overall sample.The relative performance of the 3 models differed across these subsamples, with the logistic regression model having the most favorable ROC-AUC and PR-AUC metrics of the 3 when applied to individuals diagnosed with cough, and the XG Boost-based model performing best in the female and ≥ 65 years of age subsamples.
The 3 models had distinct but overlapping sets of most important features for chronic cough identification.Seeing a pulmonary specialist and having a higher number of cough diagnoses were important features in both www.nature.com/scientificreports/ the XG Boost and the logistic regression models.One of the drawbacks of neural network models is that they are a "black box" method and do not enable identification of the variables driving the algorithm; research to develop reporting methods to address this issue is ongoing 32 .
To our knowledge, this is the first study to use machine learning approaches to identify individuals with chronic cough based solely on medical and pharmacy claims data.The study sample had an estimated chronic cough prevalence of 1.5%, which is lower than recent estimates of ~ 5% from general population surveys in both the UK and US (although similar to an estimate of 1.04% from our previous work that developed an NLP algorithm for chronic cough and deployed it in a different sample population) 19,23,27 .We hypothesize that the NLP rule-based algorithm that formed the chronic cough population in this study missed many individuals with the condition, as seen by the low prevalence compared to previous population prevalence estimates; though evaluating the sensitivity of the rule-based algorithm, either with chart review or surveys, was not performed in this work.
We had expected that the most prominent features in our models would relate to known risk factors for chronic cough such as smoking, bronchitis, asthma, gastroesophageal reflux disease, and COPD.Some of these conditions were selected in our models and were included among the top 50 most prominent features, but they were not consistently within the top 10 most important features.For example, a diagnosis of 'other COPD' was the seventh most important feature in the XG Boost-based model, while 'acute bronchitis' ranked seventeenth.Smoking was not a prominent feature for the XG Boost-based and logistic regression models.However, diagnosis and other codes for smoking or smoking cessation are generally not well populated in claims data 33 .Incentives for US providers to record smoking-related codes were introduced in 2010 as part of the 'meaningful use of certified electronic health record' policy, and have been reported to increase the sensitivity of claims-based approaches to identifying smoking status 34 .Smoking might therefore become a more important feature in future iterations of machine learning models to predict chronic cough and other smoking-related conditions.In general, it is likely that the most important predictive features of models designed to detect individuals at high risk of developing chronic cough would differ from those observed in our models, which were designed to identify individuals with current chronic cough.
A strength of the current study is that the databases we used to train, validate, and test our models included full medical and pharmacy claims data for individuals representing ~ 19% of the US commercially insured population, ~ 21% of the Medicare Advantage population, and ~ 22% of the Medicare Part D population.As such, our sample is more nationally representative of the US population than any other single provider system sample.Since different clinical claims databases contain different data types, our comparative analysis of the most prominent features for 3 machine learning models could aid in the development of guidelines for optimal model selection customized to the characteristics of individual data sets.Similarly, our comparative subsample analysis could help to guide optimal model selection for populations with different chronic cough risk profiles.
The neural network model achieved the least favorable performance of the 3 algorithms in identifying individuals with chronic cough in the study population and subgroups.This may be due to the small sample size and the relatively limited information in structured claims data.Deep learning models may be more advantageous when the source data includes free-form text or images, such as clinical notes or radiology images.
One inherent limitation of this study was the under-coding of cough diagnoses in the structured claims data.. Future studies will aim to examine and validate the performance of the machine learning models developed in this study when incorporating the new ICD-10-CM code of R05.3 for chronic cough 35 .Another limitation is that our data set included records from a network of > 140,000 providers and thus may include significant heterogeneity in how cough diagnoses and claims are recorded.In addition, medical claims data are subject to coding errors 1 .Further, diagnosis codes do not always indicate definite disease presence and in some cases may be used to rule diagnoses out, while prescription claims or written orders do not necessarily indicate that a medication was taken as prescribed.Also, given the low prevalence of the positive class, we could not stratify and test how well the model performed across different geographic locations.Table 2. Performance of predictive models of chronic cough in subpopulations with a higher prevalence of chronic cough.PR-AUC precision-recall, area under the curve; ROC-AUC receiver-operating characteristic, area under the curve.
Conclusions
Our findings can be used in payer and provider systems to identify individuals with chronic cough who may benefit from further diagnostic testing and treatment, and to identify representative populations of individuals with chronic cough to aid in clinical research.However, more work is needed to improve the PPV and sensitivity of identification models for chronic cough.Overall, we suggest that logistic regression be prioritized for future model development work on administrative claims data, due to its robust performance in the overall sample and in subsamples of individuals at higher risk for chronic cough, as well as its ease of use.Additionally, further work needs to be done to better understand use of different machine learning models in identification of chronic cough patients.
Study design
This was a retrospective observational study.All data and databases used in this study were statistically deidentified, and all study procedures were compliant with the United States Health Insurance Portability and Accountability Act.The study therefore did not require Institutional Review Board approval or informed consent.
Study sample
The The data used in this study were from 01 January 2016 through 31 March 2019.Inclusion and exclusion criteria were as described in Bali et al. 18 .Briefly, eligible participants were enrolled in a national commercial or Medicare Advantage medical and pharmacy plan in the Optum Research Clinical Database between January 2016 and March 2017, with their earliest enrollment date set as the index date.Continuous enrollment for at least 24 months after and including the index date was required for inclusion in the data set, and eligible participants were 18-85 years of age as of their index year.Enrollees were excluded from the study if their EHRs included evidence of a pharmacy fill or written medication order for an ACE inhibitor, a class of blood pressure medication that can cause chronic cough.Eligible participants were divided into training/validation (75%, n = 245,161) and test (25%, n = 82,262) data sets.The number of positive sample class (i.e., with chronic cough) were 3,651 (1.49%) and 1,167 (1.42%) in the training and test set, respectively.The number of negative sample class (i.e., without chronic cough) were 241,510 (98.5%) and 81,095 (98.6%) in the training and test set, respectively.
Development of 3 claims-based chronic cough identification models
The gold standard positive class for the development of the claims-based algorithm was generated by implementing an NLP algorithm we developed previously using data available from EHRs 28 .This algorithm has also been replicated using the Kaiser Permanente Southern California Research Data Warehouse using the same integrated database as in the current study 27 .The algorithm defines chronic cough as the presence of at least 3 cough encounters (any combination of 3 sources of information: NLP-identified mentions of 'tussis' or any inflection of the word 'cough' in free-text provider notes, occurrences of acute cough-specific diagnosis code ICD-10 R05, or written medication orders for benzonatate or dextromethorphan) in EHRs within a 120-day period, with at least 56 days between the first and last cough encounters.The negative class comprised all individuals not identified as part of the gold standard positive class.
Three claims-based classification models were constructed using supervised machine learning methods to identify individuals with chronic cough from the sample population.The models were based on Extreme Gradient Boosted Trees (XG Boost, a decision tree ensemble), logistic regression, and neural network ensemble approaches; see below for methodological details specific to each model.
All models were constructed using diagnosis, procedure, prescription, provider specialty, and patient demographic information from the individuals in the sample.Diagnosis, procedure, provider specialty, and prescription features consisted of count values for each patient to represent the total number of occurrences (i.e., doctor visits, prescriptions filled, diagnoses received, etc.) of the same code the patient has received over the 24-month observation period.Demographic (age, regional location, & gender) and insurance type information was represented as indicators.In addition, the International Statistical Classification of Diseases and Related Health Problems Procedural Classification System (ICD-PCS) was used to map all procedures to higher-level (3-digit) procedure groupings using Clinical Classification Software developed by the Healthcare Cost and Utilization Project 36 .Generic Product Identifier groupings were used for National Drug Codes; we used the first 8 digits of the Generic Product Identifier to determine the generic drug name.
Feature selection was undertaken to reduce the number of independent variables used to identify chronic cough by removing irrelevant or less relevant features that negatively affected model performance.Feature importance was based on information gain for the XGBoost model and output of the LASSO procedure for the logistic model.Features were created using data from the index date (first chronic cough date for positive class or earliest enrollment date for negative class) through the next 24 months.A minimum code prevalence cutoff of 0.04 was used for all models.Since the data set comprised predominantly categorical variables, relevant classification features of the classifier were identified by setting odds ratio thresholds where the confidence intervals were < 0.8 or > 1.2.Any features not associated with chronic cough would have odds ratios close to 1.0 and would www.nature.com/scientificreports/be effectively removed 37 .Implementing this process across all 3 machine learning models reduced the overall number of independent features by 90%.Additional feature selection processes that were unique to each model were also used; details are provided below.Nested three-fold cross-validation (resampling to determine model performance on a held-out data set) was performed to select features and hyperparameters for all models 38 .Model performance during cross-validation was based on average precision (PR-AUC).Final model coefficients and hyperparameters were fit on the entire training set and performance was evaluated on the test set (25% of the data).We also assessed the performance of the 3 models with subpopulations from the test set comprising individuals with a diagnosis of cough, females, and individuals ≥ 65 years of age.
Different binary classification methods were developed and applied to determine the best algorithm to appropriately identify chronic cough.All 3 models incorporated cross-validation and hyperparameter tuning to optimize performance.For all models, a probability threshold of 0.5 was initially used to define accurate classification of chronic cough, and then adjusted to maximize precision and specificity.The final models used the following probability thresholds: XG Boost, 0.70; logistic regression, 0.30; neural network, 0.85.Secondary analyses stratified and evaluated model performance in the test data set among individuals with cough diagnoses (defined as any instance of an ICD-10 R05 code in the individual's history), female participants, and participants ≥ 65 years of age.A probability threshold of 0.70 was used for the XG Boost model in the cough diagnosis subgroup due to the small sample size.
XG Boost-based model
XGBoost is an ensemble-based decision tree algorithm that uses boosting and gradient descent to assign and adjust the weights on each tree, minimizing loss 39 .Ensemble methods fit an initial tree-based algorithm to the data, then build a second version of the model to improve upon (boost) the classification of the first.The process is iterated until the classification error (mean squared error) no longer decreases with each subsequent boost (i.e., gradient descent).
A Bayesian search method was used in tandem with a threefold cross-validation to tune the model's hyperparameters, in order to modify the constraints in constructing each tree during cross-validation.While optimizing for average precision we limited the maximum depth of the tree, the minimal weight needed for each child node, and the minimum loss reduction needed to progress further in a leaf node; we also tuned the balance between positive and negative weights and the subsampling ratio for each tree and training instance 40 .The final optimized hyperparameter values selected after tuning and cross-validation are listed in Supplemental Table 1.Diagnosis, procedure, provider specialty, and prescription features were represented as count values for each patient, corresponding to the total number of occurrences with the same code for each patient during the study period.Demographic data (age, regional location, binary gender) and insurance type information were represented as indicators.
Logistic regression model
For the logistic regression model, all features were converted to flags, so that counts became flags for 1 or more, 2 or more, and 3 or more.We constructed a logistic regression model with L1 regularization (i.e., least absolute shrinkage and selection operator [Lasso] regression) 41 .This technique adds a regularization term to the equation as a hyperparameter that acts as a penalty term to avoid overfitting.As a result, the most important features within the model may be assigned higher final coefficients, while less important features are ultimately set to zero.This is a particular strength in a data set with sparse data or a large number of features.In general, a hyperparameter that is too small will result in no regularization term and essentially an unpenalized logistic regression model.In such cases, the model will be overfit.Conversely, a larger hyperparameter will add too much weight and can lead to an underfit model.
Grid search was applied to the model along with a K-fold cross-validation method to tune for class weight and regularization strength (C) while optimizing for average precision.As the model was fit on the training set and internally validated on a held-out validation set, the K-fold cross-validation process (K = 3) was performed 3 times with a different randomly selected subsample held-out each time.Different values were tested for the C parameter (0.1, 1, and 10); the optimized value was 0.1.The class weight parameter was calibrated between a balanced weighting and its default weight assignment; the tuned model was optimized to use its default method where both positive and negative classes had equal weighting within the cost function of the algorithm 42 .All independent variables passed into the logistic regression model were represented as binary indicators of 0 or 1.Each diagnosis or procedure code was assigned one of 3 categories based on how frequently the code occurred for each patient (≥ 1, ≥ 2, or ≥ 3).
Neural network (deep learning)-based model
Neural networks, also known as deep learning models, comprise a series of algorithms developed to recognize patterns in data that identify an outcome-in this case, presence of chronic cough.Neural networks mimic the function of the human brain via layers of interconnected neurons.Simple neural networks comprise input, hidden, and output layers.We developed a neural network using Tensorflow software 43 .Hyperparameters were tuned using KerasTuner 44 .The model consisted of a multi-layered neural network, in which each layer contained 100 neurons and used the scaled exponential linear unit activation function 45 .The output layer used a sigmoid function to produce a probability that conveyed the likelihood of a given patient being classified as positive for chronic cough.The model used an L2 penalty as a regularization technique, which adds a penalty to the model that is equal to the square of the magnitude of the coefficients, to avoid overfitting 46 .Unlike L1/Lasso methods,
Figure 2 .
Figure 2. Comparative performance of 3 identification models of chronic cough.Receiver-operating characteristic (left) and precision-recall (right) curves are shown for 3 classification models of chronic cough, based on XG Boost (A), logistic regression (B), and neural networks (C).PPV, positive predictive value; ROC-AUC, receiver-operating characteristic, area under the curve.The orange line on each ROC plot represents the performance of a hypothetical random model with an AUC of 0.5.
Table 1 .
Performance of 3 claims-based predictive models of chronic cough.NPV negative predictive value; PPV positive predictive value; PR-AUC precision-recall, area under the curve; ROC-AUC receiver-operating characteristic, area under the curve.
Full sample Individuals with cough diagnoses Females Individuals ≥ 65 years of age
sample population was drawn from Optum's Integrated Clinical + Claims Database, which combines adjudicated medical and pharmacy claims with EHRs from the Optum Research Clinical Database.The latter database currently has > 101 million unique patients from ~ 60 provider delivery organizations in the United States and Puerto Rico, with an average of 45 months of observed data per patient.The integrated database includes health plan enrollment data; clinical information, including medications prescribed and administered; lab results, vital signs, and body measurements; diagnoses and procedures; and information derived from provider notes using proprietary NLP methods. | 5,789 | 2024-01-30T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Numerical approximation of Poisson problems in long domains
In this paper, we consider the Poisson equation on a"long"domain which is the Cartesian product of a one-dimensional long interval with a (d-1)-dimensional domain. The right-hand side is assumed to have a rank-1 tensor structure. We will present methods to construct approximations of the solution which have tensor structure and the computational effort is governed by only solving elliptic problems on lower-dimensional domains. A zero-th order tensor approximation is derived by using tools from asymptotic analysis (method 1). The resulting approximation is an elementary tensor and, hence has a fixed error which turns out to be very close to the best possible approximation of zero-th order. This approximation can be used as a starting guess for the derivation of higher-order tensor approximations by an alternating-least-squares (ALS) type method (method 2). Numerical experiments show that the ALS is converging towards the exact solution (although a rigorous and general theoretical framework is missing for our application). Method 3 is based on the derivation of a tensor approximation via exponential sums applied to discretised differential operators and their inverses. It can be proved that this method converges exponentially with respect to the tensor rank. We present numerical experiments which compare the performance and sensitivity of these three methods.
Introduction
In this paper, we consider elliptic partial differential equations on domains which are the Cartesian product of a "long" interval I = (− , ) with a (d − 1)-dimensional domain ω, the cross section -a typical application is the modelling of a flow in long cylinders.As a model problem we consider the Poisson equation with homogeneous Dirichlet boundary conditions and a right-hand side which is an elementary tensor ; i.e., the product of a univariate function (on the long interval) and a (d − 1)variate function on the cross section.Such problems have been studied by using asymptotic analysis, see., e.g., [2].Our first approximation (method 1) is based on this technique and approximates the solution by an elementary tensor where the function on the cross section is the solution of a Poissonproblem on the cross section and the corresponding univariate function is determined afterwards as the best approximation in the Sobolev space H 1 0 on the long interval.In Lemma 2 below, it is shown that this approximation converges exponentially with respect to the length of the cylinder for any subdomain I 0 × ω for fixed 0 < .However, for fixed this is a one-term approximation with a fixed error.
Method 2 uses the result of method 1 as the initial guess for an iterative procedure which is an alternating least squares (ALS) method.Recursively, one assumes that a rank-k tensor approximation of the solution has already been derived and then starts an iteration to compute the k + 1 term: a) one chooses an univariate function on I as an initial guess for this iteration and determines the function on the cross section as the best approximation in H 1 0 of the cross-section.In step b) the iteration is flipped and one fixes the new function on the cross section and determines the corresponding best approximation in H 1 0 of the interval.Steps a) and b) are iterated until a stopping criterion is reached and this gives the k + 1 term in the tensor approximation.We have performed numerical experiments which are reported in Section 4 which show that this method leads to a convergent approximation also for fixed as the tensor rank of the approximation increases.However, it turns out that this method is quite sensitive and requires that the inner iteration a), b) leads to an accurate approximation of the (k + 1) term in order to ensure that the outer iteration is converging.Furthermore, the numerical experiments that we have performed indicate, that the convergence speed can slow down as the number of outer iterations increases.Thus, this method is best suited when a medium approximation accuracy of the Poisson problem is required.An analysis of this method is available in the literature for the case that the higher-dimensional Hilbert space norms are generated from one-dimensional Hilbert space norms In this case, the ALS converges for any initial guess; we refer for this and more details to [10].In our application however, the norms are not generated from one-dimensional norms and convergence is still an open theoretical question.Further results on the alternating least squares approach can be found in [3,8,9].Method 3 is based on a different approach which employs numerical tensor calculus (see [4]).First one defines an exponential sum approximation of the function 1/x.Since the differential operator −∆ is of tensor form, the exponential sum, applied to the inverse of a discretisation of the Laplacian by a matrix which must preserve the tensor format, directly leads to a tensor approximation of the solution u.We emphasize that the explicit computation of the inverse of the discretisation matrix can be avoided by using the hierarchical format for their representation (see [5]).An advantage of this method is that a full theory is available which applies to our application and allows us to choose the tensor rank via an a priori error estimate.It also can be shown that the tensor approximation converges exponentially with respect to the tensor rank (cf.[4]).
The paper is structured as follows.In Section 2 we formulate the problem on the long product domain and introduce the assumptions on the tensor format of the right-hand side.The three different methods for constructing a tensor approximation of the solution are presented in Section 3. The results of numerical experiments are presented in Section 4 where the convergence and sensitivity of the different methods is investigated and compared.For the experiments we consider first the case that the cross section is the one-dimensional unit interval and then the more complicated case that the cross section is an L-shaped polygonal domain.Finally, in the concluding section we summarize the results and give an outlook.
Setting
Let ω be an open, bounded and connected Lipschitz domain in R n−1 , n ≥ 1.In the following we consider Poisson problems on domains of the form where is large.More specifically we are interested in Dirichlet boundary value problems of the form with weak formulation Specifically we are interested in right-hand sides f which have a tensor structure of the form , where g k is a univariate function and f , f k are functions which depend only on the (d − 1)-dimensional variable x ∈ ω.Here, we use the standard tensor notation (g In this paper, we will present and compare methods to approximate u in tensor form.
We consider a right-hand side of the form and derive a first approximation of u as the solution of the (n − 1)-dimensional problem on ω:
Numerical Approximation
In this section we derive three different methods to approximate problem (1).In all three methods we exploit the special structure of the domain Ω and the right-hand side F .Our goal is to reduce the original n-dimensional problem on Ω to one or more (n − 1)-dimensional problems on ω.Compared to standard methods like finite elements methods or finite difference methods, which solve the equations on Ω , this strategy can significantly reduce the computational cost since is considered large and the discretisation in the x 1 direction can be avoided.
Method 1: A one-term approximation based on an asymptotic analysis of problem (1)
Although the right-hand side F in (1) is independent of x 1 , it is easy to see that this is not the case for the solution u , i.e., due to the homogeneous Dirichlet boundary conditions it is clear that u depends on x 1 .However, if is large one can expect that u is approximately constant with respect to x 1 in a subdomain Ω 0 , where 0 < 0 and thus converges locally to a function independent of x 1 for → ∞.The asymptotic behaviour of the solution u when → ∞ has been investigated in [1].It can be shown that where u ∞ is the solution of (3), with an exponential rate of convergence.More precisely, the following theorem holds: Theorem 1 There exist constants c, α > 0 independent of s.t.
For a proof we refer to [1,Theorem 6.6].
Theorem 1 shows that 1 ⊗ u ∞ is a good approximation of u in Ω /2 when is large.This motivates to seek approximations of u in Ω which are of the form where ψ ∈ H 1 0 (− , ).Here, we choose ψ to be the solution of the following best approximation problem: Given u ∈ H 1 0 (Ω ) and u ∞ ∈ H 1 0 (ω), find ψ ∈ H 1 0 (− , ) s.t.
In order to solve problem (4) we define the functional and consider the variational problem of minimizing it with respect to θ ∈ H 1 0 (− , ).A simple computation shows that this is equivalent to finding θ ∈ H 1 0 (I ) such that The strong form of the resulting equation is The solution of this one-dimensional boundary value problem is given by This shows that an approximation of our original problem ( 1) is given by and Note that ψ (a, •) satisfies In Section 4 we report on various numerical experiments that show the approximation properties of this rather simple one-term approximation.Lemma 2 There exist constants c, c > 0 independent of such that, for δ < , (1) ω,δ := 4 and The right-hand side in (8) goes to 0 with an exponential rate of convergence if δ is bounded from below when → ∞.
Proof.For i = 1, 2, . .., let w i be the i-th eigenfunction of −∆ , i.e., and we normalize the eigenfunctions such that (w i , w j ) 2,ω = δ i,j and order them such that (λ i ) i is increasing monotonously.Furthermore let u ,i ∈ H 1 0 (Ω ) be the solution of (∇u ,i , ∇v Then one concludes from ( 7) and ( 10) that This shows that the solutions of ( 3) and ( 1) can be expressed as With ψ as in (5) we get Let δ < .Then, since ω w i w j dx = δ i,j , we get One has for any α > 0 and similarly Since λ 1 ≤ λ i for all i ∈ N and and We employ the estimates (13) and ( 14) in ( 12) and obtain ,ω , which shows the assertion.
Lemma 2 suggests that one cannot expect convergence of the approximation ψ (λ ∞ , •) ⊗ u ∞ on the whole domain Ω .Indeed it can be shown that, in general, Lemma 2 shows that the error on Ω can be estimated as follows: where λ 1 is as in (9).
Method 2: An alternating least squares type iteration
Method 1 can be interpreted as a 2-step algorithm to obtain an approximation u M1 of u .
• Step 1: Solve (3) in order to obtain an approximation of the form 1 ⊗ u ∞ which is nonconforming, i.e., does not belong to H 1 0 (Ω ).
• Step 2: Using u ∞ , find a function ψ that satisfies (4) in order to obtain the conforming In this section we extend this idea and seek approximations of the form by iteratively solving least squares problems similar to (4).We denote by the residual of the approximation and suggest the following iteration to obtain u M2 ,m : q (m) = arg min Then, given q (m) , find p (m) ∈ H 1 0 (I ) s.t.
The algorithm exhibits properties of a greedy algorithm.It is easy to see that in each step of the iteration the error decreases or stays constant.We focus here on its accuracy in comparison with the two other methods via numerical experiments.We emphasize that for tensors of order at least 3, local convergence (under suitable conditions) can be shown for the ALS iteration (see [3,8,9,10]).
One assumption in these papers is that the scalar product of the tensor space is generated by the scalar products of the single spaces -this, however, is not the case in our setting.We note that an analysis of the approximation (15) by the theory of singular value decomposition (SVD) is also not feasible since the function u is unknown.
In each step of the (outer) iteration above we need to solve at least two minimization problems (16) and (17).In the following we derive the strong formulations of these problems.
Resolution of (16)
As before an investigation of the functional shows that q (m) needs to satisfy for all q ∈ H 1 0 (ω), where For the right-hand side we obtain .
In order to obtain the solution of (17) we therefore have to solve in Remark 4 The constants p 1,m−1 , p2,j,m−1 , q 1,m and q2,j,m involve derivatives and Laplace-operators.Note that after solving (18) and ( 19) for q (m) and p (m) , discrete versions of ∆ q (m) and p (m) can be easily obtained via the same equations.Furthermore, since a numerical computation of the gradients can be avoided.
Method 3: Exploiting the tensor product structure of the operator
In this section we exploit the tensor product structure of the Laplace operator and the domain Ω .
Note, that we do not assume that ω has a tensor product structure.Furthermore the Laplace operator in our original problem (1) can be written as We discretise (1) with F as in (2) on a mesh G, e.g., by finite elements or finite differences on a tensor mesh, i.e., each mesh cell has the form (x i−1 , x i ) × τ j , where τ j is an element of the mesh for ω.The essential assumption is that the system matrix for the discrete version of −∆ in (20) is of the tensor form If we discretise with a finite difference scheme on an equidistant grid for I with step size h, then A 1 is the tridiagonal matrix h −2 tridiag [−1, 2, −1] and M 1 is the identity matrix.A finite element discretisation with piecewise linear elements leads as well to It can be shown that the inverse of the matrix A can be efficiently approximated with a sum of matrix exponentials.More precisely the following Theorem holds which is proved in [4], Proposition 9.34.
Theorem 5 Let M (j) , A (j) be positive definite matrices with λ (j) min and λ (j) max being the extreme eigenvalues of the generalized eigenvalue problem A (j) x = λM (j) x and set Then A −1 can be approximated by where the coefficients a ν , α ν > 0 are such that max .The error can be estimated by where M = ⊗ n j=1 M (j) .
Theorem 5 shows how the inverse of matrices of the form (21) can be approximated by sums of matrix exponentials.It is based on the approximability of the function 1/x by sums of exponentials in the interval [a, b].We refer to [4,6] for details how to choose r and the coefficients a ν,[a,b] , α ν, [a,b] in order to reach a given error tolerance ε( 1x , [a, b], r).Note that the interval [a, b] where 1/x needs to be approximated depends on the matrices A (j) and M (j) .Thus, if A changes a and b need to be recomputed which in turn has an influence on the optimal choice of the parameters a ν,[a,b] and α ν, [a,b] .
Numerical methods based on Theorem 5 can only be efficient if the occurring matrix exponential can be evaluated at low cost.In our setting we will need to compute the matrices The evaluation of the first matrix will typically be simpler.In the case where a finite difference scheme is employed and A 1 is a tridiagonal Toeplitz matrix while M 1 is the identity, the matrix exponential can be computed by diagonalizing A 1 , e.g., A 1 = SD 1 S −1 , and using exp The computation of exponentials for general matrices is more involved.We refer to [7] for an overview of different numerical methods.Here, we will make use of the Dunford-Cauchy integral (see [5]).For a matrix M we can write for contour C = ∂D which encircles all eigenvalues of M .We assume here that M is positive definite.Then the spectrum of M satisfies σ( M ) ⊂ (0, M ] and the following (infinite) parabola ds. ( The integrand decays exponentially for s → ±∞.Therefore ( 23) can be efficiently approximated by sinc quadrature, i.e., exp where h > 0 and should be chosen s.t.h = O (N + 1) −2/3 .We refer to [5] for an introduction to sinc quadrature and for error estimates for the approximation in (24).The parameters h and N in our implementation have been chosen such that quadrature errors become negligible compared to the overall discretisation error.For practical computations, the halving rule (cf .[5, §14.2.2.2]) could be faster while the Dunford-Schwartz representation with sinc quadrature is more suited for an error analysis.
The case of a planar cylinder
In this subsection we apply the methods derived in Section 3 to a simple model problem in two dimensions.We consider the planar cylinder and solve (1) for different right-hand sides ) and different lengths .The reduced problem (3) on ω = (−1, 1) is solved using a standard finite difference scheme.We compare the approximations of (1) to a reference solution u ref 2D, that is computed using a finite difference method on sufficiently refined two-dimensional grid.
In Table 1 we state the L 2 Ω 2D -errors of the approximations u M1 2D, for various values of and right-hand sides f .Having in mind that u M1 2D, is a rather simple one-term approximation that only requires the solution of one (n − 1)-dimensional problem (plus some postprocessing), the accuracy of the approximation is satisfactory especially for larger values of .
Figure 2 shows the pointwise, absolute error |u M1 2D, − u ref 2D, | in Ω for = 10 and f (x ) = tanh(4x + 1).As expected the accuracy of the approximation is very high in the interior of the planar cylinder (away from ± ).Lemma 2 (and Figure 2) suggests that the approximation in the interior of the cylinder is significantly better than on the whole domain Ω .Indeed, if the region of interest is only a subdomain Ω 0 ⊂ Ω , where 0 < , the error decreases exponentially as 0 → 0. Figure 3 shows the relative error with respect to 0 for = 20, 50 and the right-hand side f (x ) = tanh(4x + 1).We can see that the exponential convergence sets in almost immediately as l 0 moves away from .
To conclude, Method 1 can be used in applications where • only a limited approximation accuracy is required, • a good starting point for more accurate methods is needed, • the region of interest is a subdomain Ω 0 of Ω with 0 < .
In Method 2 we use u M1
2D, as starting value of the iteration which is then successively refined by approximating the residual in each step with a series of L 2 best approximations.In Table 2 we 2D, ,m for different values of and iterations m.We used f (x ) = tanh(4x + 1) throughout.
state the relative errors of this approach in the case f (x ) = tanh(4x + 1) for different values of and iteration steps.We can see that five iterations are sufficient to reduce the error of the initial approximation u M1 2D, by a factor 100 for all considered values of .However, in this case more iterations do not lead to significantly better results and the convergence seems to flatten.One explanation for this is that the residuals are increasingly difficult to approximate with each step of the iteration.After a few iterations a one-term approximation of these residuals of the form p (m) ⊗ q (m) therefore is not sufficiently accurate which leads to reduced decay of the error in the overall scheme.Note that in the case = 1, Ω cannot be considered as a "long" domain.Therefore, the initial approximation u M1 2D, only exhibits a low accuracy.Nevertheless the error of u M2 2D, ,m decays quickly as m increases and reaches a similar level of accuracy as for larger .This suggests that Method 2 can also be used for more general domains Ω .
In Table 3 we show the relative errors of the approximations u M3 2D, ,r for f (x ) = tanh(4x + 1) and different values of and r.As the theory predicts the error decays exponentially in r and is governed by the approximability of the function 1/x by exponential sums.Note that in this two-dimensional example the arising matrix exponentials could be computed via diagonalization of 2D, ,r for different values of and r.We used f (x ) = tanh(4x + 1) throughout.
the involved finite difference matrices.An approximation of the Dunford-Cauchy integral was not necessary in this case.
A three-dimensional domain with a non-rectangular cross section
In this section we consider the three-dimensional domain where ω is not a rectangle (see Figure 4).As before we solve problem (1) for different right-hand sides f and different values of .The reduced problem (3) on ω is solved using a standard 2D finite difference scheme.As 3D reference solution we use an accurate approximation using method 3, i.e. u M3 3D, ,r for r = 30, which is known to converge exponentially in r.Table 4 shows the relative errors of the approximations u M1 3D, for different values of and righthand sides f .As the theory predicts we cannot observe an exponentially decreasing error as gets large, since we measure the error on the whole domain Ω and not only a subdomain Ω −δ .As before we only have to solve one two-dimensional problem on ω in order to obtain the approximation u M1 3D, .3D, ,m for different values of and iterations m.We used f (x ) = tanh(x 1 x 2 ) throughout.
In Table 5 we show the relative errors of the approximations u M2 3D, ,m for f (x ) = tanh(x 1 x 2 ) and different values of and m (number of iterations).As in the 2D case this method significantly improves the initial approximation u M2 3D, ,1 = u M1 3D, using the alternating least squares type iteration.However, also here we observe that the convergence slows down when a certain accuracy is reached.We remark that a good starting point for the iteration is crucial for this method.In all our experiments u M1 3D, was a good choice which leads to a convergence behaviour similar to the ones in Table 5.Other choices often did not lead to satisfactory results.
In Table 6 we show the relative errors of the approximations u M3 3D, ,r again for f (x ) = tanh(x 1 x 2 ) and different values of and r.As before the error decays exponentially with respect to r.The arising matrix exponentials exp −α ν,[a,b] A x in these experiments were computed using the sinc quadrature approximation (24).The number of quadrature points N was chosen such that the corresponding quadrature error had an negligible effect on the overall approximation.3D, ,r for different values of and r.We used f (x ) = tanh(x 1 x 2 ) throughout.
Conclusion
We have presented three different methods for constructing tensor approximations to the solution of a Poisson equation on a long product domain for a right-hand side which is an elementary tensor.
The construction of a one-term tensor approximation is based on asymptotic analysis.The approximation converges exponentially (on a fixed subdomain) as the length of the cylinder goes to infinity.However, the error is fixed for fixed length since the approximation consists of only one term.The cost for computing this approximation is very low -it consists of solving a Poisson-type problem on the cross section and a cheap post-processing step to find the univariate function in the one-term tensor approximation.
The ALS method uses this elementary tensor and generates step-by-step a rank-k approximation.The computation of the m-th term in the tensor approximation itself requires an inner iteration.If one is interested in only a moderate accuracy (but improved accuracy compared to the initial approximation) this method is still relatively cheap and significantly improves the accuracy.However, the theory for ALS for this application is not fully developed and the definition of a good stopping criterion is based on heuristics and experiments.
Finally the approximation which is based on exponential sums is the method of choice if a higher accuracy is required.A well developed a priori error analysis allows us to choose the tensor rank in the approximation in a very economic way.Since the method is converging exponentially with respect to the tensor rank, the method is also very efficient (but more expensive than the first two methods for the very first terms in the tensor representation).However, its implementation requires the realization of inverses of discretisation matrices in a sparse H-matrix format and a contour quadrature approximation of the Cauchy-Dunford integral by sinc quadrature by using a non-trivial parametrisation of the contour.
We expect that these methods can be further developed and an error analysis which takes into account all error sources (contour quadrature, discretisation, iteration error, asymptotics with respect to the length of the cylinder, H-matrix approximation) seems to be feasible.Also the methods are interesting in the context of a-posteriori error analysis to estimate the error due to the truncation of the tensor representation at a cost which is proportional to the solution of problems on the cross sections.We further expect that more general product domains of the form × d m=1 ω m for some ω m ∈ R dm with dimensions 1 ≤ d m ≤ d such that d m=1 d m = d and domains with outlets can be handled by our methods since also in this case zero-th order tensor approximation can be derived by asymptotic analysis (see [2]).
Figure 1
Figure1shows a plot of ψ (λ ∞ , •) for = 20 and λ ∞ = 2. Since ψ approaches 1 with an exponential rate as x 1 moves away from ± towards the origin, an analogous result to Theorem 1 can be shown for u M1 .
Table 1 :
Relative L 2 Ω 2D -errors of the approximations u M 1 2D, for different values of and f .
Table 2 :
Relative L 2 -errors of the approximations u M 2
Table 3 :
Relative L 2 -errors of the approximations u M 3
Table 4 :
Relative L 2 -errors of the approximations u M 1 3D, for different values of and f .
Table 5 :
Relative L 2 -errors of the approximations u M 2
Table 6 :
Relative L 2 -errors of the approximations u M 3 | 6,314 | 2018-11-06T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Efficient Parameter Estimation for Sparse SAR Imaging Based on Complex Image and Azimuth-Range Decouple
Sparse signal processing theory has been applied to synthetic aperture radar (SAR) imaging. In compressive sensing (CS), the sparsity is usually considered as a known parameter. However, it is unknown practically. For many functions of CS, we need to know this parameter. Therefore, the estimation of sparsity is crucial for sparse SAR imaging. The sparsity is determined by the size of regularization parameter. Several methods have been presented for automatically estimating the regularization parameter, and have been applied to sparse SAR imaging. However, these methods are deduced based on an observation matrix, which will entail huge computational and memory costs. In this paper, to enhance the computational efficiency, an efficient adaptive parameter estimation method for sparse SAR imaging is proposed. The complex image-based sparse SAR imaging method only considers the threshold operation of the complex image, which can reduce the computational costs significantly. By utilizing this feature, the parameter is pre-estimated based on a complex image. In order to estimate the sparsity accurately, adaptive parameter estimation is then processed in the raw data domain, combining with the pre-estimated parameter and azimuth-range decouple operators. The proposed method can reduce the computational complexity from a quadratic square order to a linear logarithm order, which can be used in the large-scale scene. Simulated and Gaofen-3 SAR data processing results demonstrate the validity of the proposed method.
Introduction
Synthetic aperture radar (SAR) is an important imaging technology that has been applied in environmental protection and marine observation [1,2]. In recent years, the sparse signal processing method based on CS [3] has been implemented in microwave imaging [4,5]. It can recover the scene by solving L q (0 < q ≤ 1) regularization.
In [6], Çetin et al. proposed a sparsity-driven SAR imaging model for achieving autofocusing and moving targets imaging. Zhang et al. [7] explored the principles and applications in sparse microwave imaging. Patel et al. [8] analyzed different azimuth sampling methods based on the CS model. Luo et al. [9] developed a multiple scatterers detection method for SAR tomography with CS approach. Hossein et al. [10] proposed a polarimetric SAR estimator under the frame of CS. In [11], Zhu reviewed the CS-based super-resolving algorithm. Zhang et al. [12] proposed a novel where y ∈ C N×1 is the SAR echo data vector, N = N a (azimuth) × N r (range), x ∈ C N×1 is the backscattered coefficient vector, Φ is the observation matrix, and n ∈ C N×1 is the noise vector.
For the data formation model Equation (1), if the considered scene x is sparse enough and the observation matrix Φ satisfies the restricted isometry property (RIP) [24], x can be reconstructed by solving the L 1 optimization problem: where λ is the regularization parameter. There are many algorithms to solve (e.g., Equation (2)), such as the convex optimization algorithm [25], Bayesian learning algorithm [26], nonconvex optimization algorithm [27,28], and greedy algorithm [29]. After reconstruction,x is transferred to a matrixX Φ .
Azimuth-Range Decouple-Based Sparse SAR Imaging
The azimuth-range decouple-based sparse SAR imaging method is proposed in [7,15], the echo simulation operator G(·) is used to replace the observation matrix Φ, which is the inverse of the imaging operator I(·), that is, G(·) = I −1 (·) ≈ Φ. Then the azimuth-range decouple-based sparse SAR data formation model is represented as follows: E a ∈ C N a ×N a is an azimuth downsampling matrix, E r ∈ C N r ×N r is a range downsampling matrix. Both E a and E r are the binary matrices to denote the downsampling strategy, which are no longer identity matrices, thus reducing the number of measurements. Y ∈ C N a ×N r is the SAR raw data matrix, X ∈ C N a ×N r is the backscattered coefficient matrix, and N ∈ C N a ×N r is the noise matrix.
For this data formation model, the considered scene can be reconstructed by solving the L 1 optimization problem:X where · 2 is the 2-norm of a matrix.
Complex Image-Based Sparse SAR Imaging
A complex image-based sparse SAR imaging method is proposed in [22]. This method first establishes the imaging model with the complex image after MF recovery as the input, then represents the reconstruction of sparse scene as an L 1 optimization problem, and finally utilizes the iterative recovery algorithm to get the focused high-resolution SAR imagery. The signal model is represented as follows: where X MF ∈ C N a ×N r is the MF-reconstructed SAR complex image and N 0 ∈ C N a ×N r is the noise matrix. For this model, the considered scene can also be reconstructed by solving the L 1 optimization problem:X
Automatic Parameter Estimation Method
Several methods have been presented for automatically estimating the regularization parameter. We choose the generalized cross-validation (GCV) method [14,30] as the parameter estimation method, which can estimate λ by minimizing the following cost function without knowing the noise variance: where Y ∈ C N a ×N r is the SAR raw data matrix, N is the scene size, and tr(·) is the trace operator of a matrix. H λ is given in Equation (8): In (8), and β is a small positive constant.
Efficient Adaptive Parameter Estimation for Sparse SAR Imaging
In this section, the parameter estimation method based on azimuth-range decouple and the parameter estimation method based on complex image are introduced. Next, we introduce the proposed method in detail. Finally, the computational complexity of these methods is analyzed.
The Adaptive Parameter Estimation Method Based on Azimuth-Range Decouple
Combining the azimuth-range decouple operators with GCV, we can get the adaptive parameter estimation method for sparse SAR imaging. Compared with the adaptive parameter estimation method based on observation matrix, this method can reduce the computational complexity. Considering that M(X λ , β) is a large diagonal matrix of N × N, the computational cost of the trace of it is also large, we replace the trace operator tr(·) with the sum operator. Equation (7) can be rewritten as follows: which is the cost function of the adaptive parameter estimation method based on azimuth-range decouple, where G(·) is the echo simulation operator andX λ1 is shown in Equation (4). There are several algorithms to achieve the sparse reconstruction, such as iterative soft thresholding (IST) [31] and complex approximated message passing (CAMP) [32,33]. In this paper, we choose CAMP as sparse reconstruction algorithm, which has been applied to constant false-alarm rate (CFAR) detection in sparse SAR imaging [34].
The optimal regularization parameter is estimated by minimizing Equation (9). However, considering that finding the optimal regularization parameter requires the iterative processing, the total computational cost of the adaptive parameter estimation method based on azimuth-range decouple is still large.
The Adaptive Parameter Estimation Method Based on Complex Image
Compared with the azimuth-range decouple-based sparse SAR imaging method, the complex image-based sparse SAR imaging method only considers the threshold operation, which can further reduce the computational and memory costs. Combining it with GCV, we can get the adaptive parameter estimation method for sparse SAR imaging based on complex image. Equation (7) can be rewritten as follows: which is the cost function of the adaptive parameter estimation method based on complex image, whereX λ2 is shown in Equation (6).
The Proposed Method
The proposed method is mainly for the case of the downsampled data. On the one hand, although the adaptive parameter estimation method based on azimuth-range decouple can estimate the sparsity accurately, as mentioned above, the total computational cost of this method is large. On the other hand, due to the energy dispersion and ambiguities, the estimated sparsity of the parameter estimation method based on complex image will be greater than the true value, and we cannot simply use the parameter estimation method based on complex image to replace the parameter estimation method based on azimuth-range decouple. Therefore, we need to find a method to adaptively estimate the sparsity accurately while having the lower computational complexity. A good solution is to combine these two adaptive methods together, utilizing the complex image to pre-estimate the parameter and reduce the iteration range, then estimating the accurate parameter with raw data.
The proposed method has three steps. First, set the iteration range of sparsity to [K min , K max ] and adaptively estimate the sparsity based on the complex SAR image which is reconstructed by the downsampled raw data. The pre-estimated sparsity is set to K mid , which is greater than the true value due to ambiguities and energy dispersion caused by downsampling. Second, update the iteration range from [K min , K max ] to [K min , K mid ]. Third, get the adaptive reconstructed image and the optimal adaptive result of sparsity K opt on the new range [K min , K mid ] based on raw data.
The flowchart is shown in Figure 1.
The Proposed Method
The proposed method is mainly for the case of the downsampled data. On the one hand, although the adaptive parameter estimation method based on azimuth-range decouple can estimate the sparsity accurately, as mentioned above, the total computational cost of this method is large. On the other hand, due to the energy dispersion and ambiguities, the estimated sparsity of the parameter estimation method based on complex image will be greater than the true value, and we cannot simply use the parameter estimation method based on complex image to replace the parameter estimation method based on azimuth-range decouple. Therefore, we need to find a method to adaptively estimate the sparsity accurately while having the lower computational complexity. A good solution is to combine these two adaptive methods together, utilizing the complex image to pre-estimate the parameter and reduce the iteration range, then estimating the accurate parameter with raw data.
The proposed method has three steps. First, set the iteration range of sparsity to The flowchart is shown in Figure 1. The details of adaptive parameter estimation based on azimuth-range decouple are shown in Algorithm 1, where [K min , K mid ] is the range of the sparsity; η λ,µ,CAMP (·) is the threshold function of CAMP.
Analysis of Computational Complexity
The computational complexity of different adaptive parameter estimation methods is analyzed in this section. A common characteristic of the adaptive parameter estimation methods mentioned above is that regularization parameter iterations are required. The difference lies in the different sparse reconstruction algorithms.
The measure of the computational complexity is the floating point operation (FLOP). Each FLOP represents a real addition operation or a real multiplication operation. In the observation matrix-based sparse SAR imaging method and azimuth-range decouple-based sparse SAR imaging method, the main calculation includes the imaging process, the echo simulation process, and the threshold process. The computational complexity of the threshold process is (8 n + n log 2 n) FLOPs, where n = N a × N r is the scene size. In the observation matrix-based sparse SAR imaging method, the imaging process and echo simulation process are two matrix multiplications. The main computational complexity of a single-step iteration of the observation-matrix-based sparse SAR imaging method is (16 mn + 8n + n log 2 n) FLOPs, where m is the sampling number. This computational complexity is approximately proportional to the quadratic square of the scene size.
In this paper, the chirp scaling [35] operator is chosen as the imaging operator. Therefore, I(·) and G(·) can be expressed as follows: where F a and F −1 a are the azimuth Fourier transform (FFT) operators and azimuth inverse Fourier transform (IFFT) operators, F r and F −1 r are the range FFT operators and range IFFT operators, Θ sc , Θ rc and Θ ac are three complex phase matrix. Chirp scaling and inverse chirp scaling both contain two FFTs, two IFFTs, and three time complex phase multiplications. According to [2], the computational complexity of FFT and IFFT with length l 0 is (5l 0 log 2 l 0 ). FLOPs, and the computational complexity of a complex multiplication operation is six FLOPs. Assuming that the data are sampled in the manner of uniform/nonuniform downsampling, the main computational complexity of a single-step iteration of the azimuth-range decouple-based sparse SAR imaging method is (46 n + 2m + 21n log 2 n) FLOPs, which is approximately proportional to the product of the linear logarithm of the scene size.
The complex image-based sparse SAR imaging method includes only threshold process. The computational complexity of a single-step iteration of this method is (8 n + n log 2 n) FLOPs, which is much lower than the azimuth-range decouple-based sparse SAR imaging method.
Let I represent the iteration steps of the recovery for sparse reconstruction algorithms. Let J and J 2 denote the number of iteration steps required for regularization parameter convergence when the iteration ranges of sparsity are [K min , K max ] and [K min , K mid ], respectively. Assuming that I = 20, J = 16, J 2 = J/4, the scene size n = 4096 × 4096, and the downsampling rate m/n = 80%, the computational complexity of different adaptive parameter estimation methods is shown in Table 1. Since the proposed method utilizes the complex image as the prior information to pre-estimate the parameter, the iteration range of the sparsity is reduced when the adaptive parameter estimation is processed in the raw data domain. Therefore, the proposed method has the lower computational complexity compared with the parameter estimation method based on azimuth-range decouple. For example, if the scene size is 4096 × 4096 and the downsampling rate is 80%, the proposed method can increase the computational efficiency about 3-4-fold.
Experiments
In this section, both simulation and real data experiments have been carried out to validate the effectiveness of the proposed method. The 1D simulation experiments compare the performance and reconstruction precision of the parameter estimation method based on observation matrix, parameter estimation method based on complex image and the proposed method. The 2D simulation experiments compare the adaptive result and computational complexity of different adaptive parameter estimation methods. Airborne SAR data and Gaofen-3 SAR data experiments are done to validate the ability of the proposed method to suppress energy dispersion and ambiguities. At last, the computational complexity of different adaptive parameter estimation methods is compared for different scene size.
1D Simulation
To validate the effectiveness of the proposed method, 1D simulation experiments are carried out. We set five point targets. Figure 2a shows the reconstructed images obtained by MF, parameter estimation method based on observation matrix, parameter estimation method based on complex image and the proposed method. In Figure 2a, the signal-to-noise ratio (SNR) is 15 dB and the downsampling rate is 80%. The adaptive λ opt of the adaptive parameter estimation method based on the observation matrix is 0.10 and the adaptive λ opt of the proposed method is 0.09. Due to downsampling, the adaptive λ of the adaptive parameter estimation method based on a complex image is 0.06, which is smaller than other two methods. From Figure 2a, we can conclude that the proposed method can effectively suppress the sidelobes and energy dispersion, and can recover the positions of target accurately compared with the positions of the ground truth. L 1 regularization is known as a biased estimator [36,37], and the bias would underestimate the intensities of the targets. Therefore, in Figure 2a, the target amplitude of the proposed method is lower than the ground truth.
image is 0.06, which is smaller than other two methods. From Figure 2a, we can conclude that the proposed method can effectively suppress the sidelobes and energy dispersion, and can recover the positions of target accurately compared with the positions of the ground truth. L1 regularization is known as a biased estimator [36,37], and the bias would underestimate the intensities of the targets. Therefore, in Figure 2a, the target amplitude of the proposed method is lower than the ground truth. In order to explore the accuracy of different adaptive parameter estimation methods, Figure 2b shows the relative mean square error (RMSE) curves of three methods at different SNR and downsampling rate, where the downsampling rate are 50% and 80%, respectively. It can be seen from Figure 2b that the proposed method has the similar sparse recovery performance as the adaptive method based on an observation matrix. The reconstruction precision of the adaptive method based on a complex image is worse than other two methods when the raw data are downsampled.
2D Simulation
In order to further analyze the effectiveness of the proposed method, 2D simulation experiments are carried out. The major simulation parameters are given in Table 2. The imaging results of nine point targets are shown in Figure 3. In Figure 3, the signal-to-noise ratio (SNR) is 20 dB and the downsampling rate is 80%. Figure 3a shows the image reconstructed by MF. Figure 3b To better compare the reconstruction results of different methods, Figure 3d shows the azimuth profile of the 2D simulation experiment. Due to the bias of L1 regularization, in Figure 3d, the target amplitude of the proposed method is lower than the ground truth. In order to explore the accuracy of different adaptive parameter estimation methods, Figure 2b shows the relative mean square error (RMSE) curves of three methods at different SNR and downsampling rate, where the downsampling rate are 50% and 80%, respectively. It can be seen from Figure 2b that the proposed method has the similar sparse recovery performance as the adaptive method based on an observation matrix. The reconstruction precision of the adaptive method based on a complex image is worse than other two methods when the raw data are downsampled.
2D Simulation
In order to further analyze the effectiveness of the proposed method, 2D simulation experiments are carried out. The major simulation parameters are given in Table 2. The imaging results of nine point targets are shown in Figure 3. In Figure 3, the signal-to-noise ratio (SNR) is 20 dB and the downsampling rate is 80%. Figure 3a shows the image reconstructed by MF. Figure 3b shows the image reconstructed by adaptive parameter estimation method based on complex image, with the adaptive result λ being 0.17. From Figure 3b, we can see that the adaptive result based on complex image is not accurate when the raw data are downsampled, with the sidelobes still existing. Figure 3c shows the image reconstructed by the proposed method, with the adaptive result λ opt being 0.32. To better compare the reconstruction results of different methods, Figure 3d shows the azimuth profile of the 2D simulation experiment. Due to the bias of L 1 regularization, in Figure 3d, the target amplitude of the proposed method is lower than the ground truth. Table 3 shows the adaptive λ and RMSE of the parameter estimation method based on azimuthrange decouple, the parameter estimation method based on complex image, and the proposed method, respectively. According to Table 4, the adaptive λ of the adaptive parameter estimation method based on complex image is smaller than other two methods when the raw data are downsampled, and varies with the SNR and downsampling rate. We can also conclude that the adaptive parameter estimation method based on azimuth-range decouple and the proposed method have almost the same sparse recovery performance. With the decrease in the downsampling rate, the RMSE of different adaptive parameter estimation methods increases. Therefore, the downsampling rate is crucial for the reconstruction accuracy of the adaptive parameter estimation methods. In this 2D simulation experiments with different SNR and downsampling rates are also carried out. Table 3 shows the adaptive λ and RMSE of the parameter estimation method based on azimuth-range decouple, the parameter estimation method based on complex image, and the proposed method, respectively. According to Table 4, the adaptive λ of the adaptive parameter estimation method based on complex image is smaller than other two methods when the raw data are downsampled, and varies with the SNR and downsampling rate. We can also conclude that the adaptive parameter estimation method based on azimuth-range decouple and the proposed method have almost the same sparse recovery performance. With the decrease in the downsampling rate, the RMSE of different adaptive parameter estimation methods increases. Therefore, the downsampling rate is crucial for the reconstruction accuracy of the adaptive parameter estimation methods. In this experiment, when the downsampling rate is 80% and the SNR is 25 dB, the proposed method has the smallest RMSE, which is the best result. Next, we will analyze the computational complexity. To illustrate that the proposed method has lower computational complexity, the computational complexity of different adaptive parameter estimation methods is compared for different scene size is represented in Figure 4. experiment, when the downsampling rate is 80% and the SNR is 25 dB, the proposed method has the smallest RMSE, which is the best result. Next, we will analyze the computational complexity. To illustrate that the proposed method has lower computational complexity, the computational complexity of different adaptive parameter estimation methods is compared for different scene size is represented in Figure 4. Figure 4 illustrates the computational complexity of three different adaptive parameter estimation methods for different scene size clearly. If the size of scene is over 1024 1024 × , the computational complexity of the adaptive parameter estimation method based on azimuth-range decouple increases dramatically. Although the computational complexity of the adaptive parameter estimation method based on complex image is the lowest, the adaptive result of this method is not accurate when the raw data are downsampled, as shown in Table 4. The proposed method utilizes complex image as prior information, thus having the lower computational complexity compared with the adaptive parameter estimation method based on azimuth-range decouple. Figure 4 illustrates the computational complexity of three different adaptive parameter estimation methods for different scene size clearly. If the size of scene is over 1024 × 1024, the computational complexity of the adaptive parameter estimation method based on azimuth-range decouple increases dramatically. Although the computational complexity of the adaptive parameter estimation method based on complex image is the lowest, the adaptive result of this method is not accurate when the raw data are downsampled, as shown in Table 4. The proposed method utilizes complex image as prior information, thus having the lower computational complexity compared with the adaptive parameter estimation method based on azimuth-range decouple.
Airborne Data
The airborne SAR data processing results are shown in Figure 5. The raw data are 80% randomly downsampled, received by the C-band airborne SAR system of Institute of Electronics, Chinese Academy of Sciences. The accurate sparsity of this scene is 0.02.
Airborne Data
The airborne SAR data processing results are shown in Figure 5. The raw data are 80% randomly downsampled, received by the C-band airborne SAR system of Institute of Electronics, Chinese Academy of Sciences. The accurate sparsity of this scene is 0.02. In order to better evaluate the performance of different adaptive methods, the integrated sidelobe ratio (ISLR) is chosen to quantitatively measure the ability to suppress the energy dispersion [2]: total main 10 main where main P is the main-lobe power, total P is the total power. Figure 5a shows the image reconstructed by MF, with the obvious energy dispersion, and Figure 5d is the azimuth profile of the imaging result of MF, with the ISLR being −6.59 dB. Figure 5b shows the imaging result of the adaptive parameter estimation method based on complex image, with the adaptive result of sparsity mid K = 0.21. Figure 5e is the corresponding azimuth profile, with ISLR being −9.14 dB. From Figure 5b,e, we can see that when the raw data are downsampled, the adaptive parameter estimation method based on complex image cannot obtain an accurate result, with energy dispersion still existing. Figure 5c is the imaging result of the proposed method, with the adaptive result of sparsity opt K = 0.02, which converges to the accurate sparsity of the scene. Figure 5f is the azimuth profile of the imaging result of the proposed method, with ISLR being −10.55 dB. The proposed method can accurately estimate the sparsity and effectively suppress the noise and energy dispersion. In order to better evaluate the performance of different adaptive methods, the integrated sidelobe ratio (ISLR) is chosen to quantitatively measure the ability to suppress the energy dispersion [2]: where P main is the main-lobe power, P total is the total power. Figure 5a shows the image reconstructed by MF, with the obvious energy dispersion, and Figure 5d is the azimuth profile of the imaging result of MF, with the ISLR being −6.59 dB. Figure 5b shows the imaging result of the adaptive parameter estimation method based on complex image, with the adaptive result of sparsity K mid = 0.21. Figure 5e is the corresponding azimuth profile, with ISLR being −9.14 dB. From Figure 5b,e, we can see that when the raw data are downsampled, the adaptive parameter estimation method based on complex image cannot obtain an accurate result, with energy dispersion still existing. Figure 5c is the imaging result of the proposed method, with the adaptive result of sparsity K opt = 0.02, which converges to the accurate sparsity of the scene. Figure 5f is the azimuth profile of the imaging result of the proposed method, with ISLR being −10.55 dB. The proposed method can accurately estimate the sparsity and effectively suppress the noise and energy dispersion.
Gaofen-3 Data
The proposed method is also applicable to the spaceborne data. The Gaofen-3 satellite is a remote sensing satellite of China's high-resolution special project, which was launched in August 2016. It is the first C-band multipolarized SAR imaging satellite with a resolution of 1 m. Gaofen-3 data are processed to verify the background clutter and noise suppressing ability and ambiguity suppressing ability of the proposed method. In this experiment, we perform 80% random downsampling for the fully sampled data. The Gaofen-3 data processing results are shown in Figure 6.
Gaofen-3 Data
The proposed method is also applicable to the spaceborne data. The Gaofen-3 satellite is a remote sensing satellite of China's high-resolution special project, which was launched in August 2016. It is the first C-band multipolarized SAR imaging satellite with a resolution of 1 m. Gaofen-3 data are processed to verify the background clutter and noise suppressing ability and ambiguity suppressing ability of the proposed method. In this experiment, we perform 80% random downsampling for the fully sampled data. The Gaofen-3 data processing results are shown in Figure 6. Figure 6a gives the MF imaging results of the downsampled raw data, with the obvious energy dispersion and azimuth ambiguities. Figure 6b shows the imaging result of the adaptive parameter estimation method based on azimuth-range decouple, with the adaptive result of sparsity opt K = 0.3514. It can be seen that this method can reconstruct the scene successfully and suppress the noise, energy dispersion and ambiguities efficiently. Figure 6c shows the imaging result of the adaptive parameter estimation method based on complex image, with the adaptive result of sparsity mid K = Figure 6a gives the MF imaging results of the downsampled raw data, with the obvious energy dispersion and azimuth ambiguities. Figure 6b shows the imaging result of the adaptive parameter estimation method based on azimuth-range decouple, with the adaptive result of sparsity K opt = 0.3514. It can be seen that this method can reconstruct the scene successfully and suppress the noise, energy dispersion and ambiguities efficiently. Figure 6c shows the imaging result of the adaptive parameter estimation method based on complex image, with the adaptive result of sparsity K mid = 0.46. From Figure 6c, we can see that the adaptive result based on complex image is not accurate, with energy dispersion still existing. These two experimental results prove that the parameter estimation method based on azimuth-range decouple and the parameter estimation method based on complex image are not equivalent when the raw data are downsampled. However, we can use this pre-estimated sparsity as prior information to reduce the iteration ranges. Figure 6d is the imaging result of the proposed method, with the adaptive result of sparsity K opt = 0.3522, which is basically the same with the adaptive parameter estimation method based on azimuth-range decouple.
To further evaluate the ability of different adaptive methods to suppress the noise and ambiguity, target-to-background ratio (TBR) [38] and azimuth ambiguity-to-signal ratio (AASR) [23] are selected as two evaluation indicators. Their discrete expressions are defined as follows: TBR(X) = 20 log 10 where B is the background area, N B is the pixel number in B, and T is the target region. AASR = 10 log 10 where A is target region, N a is the pixels number in A, M a is the ambiguity area, and N m is the pixel number in M a . In this experiment, we chose five ships as performance test regions, as shown in the corresponding red frames. These five ships are represented as Ship 1-5, from left to right. Their corresponding azimuth ambiguity areas are shown in the blue frames.
The TBR of these five ships reconstructed by different methods are shown in Table 4. It can be seen from Table 5 that the proposed method can suppress the noise and energy dispersion effectively when the raw data are downsampled. Table 5. AASR of target regions based on different methods with downsampled data (80% downsampling).
Methods
Azimuth Ambiguity-to-Signal Ratio (dB) The AASR of these five ships reconstructed by different methods are shown in Table 5. From Table 5, we can see that the adaptive parameter estimation method based on complex image cannot suppress the azimuth ambiguity effectively. As a contrast, the adaptive parameter estimation method based on azimuth-range decouple and the proposed method both have the ability to decrease the azimuth ambiguity-to-signal ratio.
It can be seen from Tables 4 and 5 that the adaptive parameter estimation method based on azimuth-range decouple and the proposed method have almost the same sparse recovery performance. According to the previous analysis and the simulation experiments, the proposed method has the lower computational complexity, which can be used in the large-scale scene.
Conclusions
In this paper, an efficient adaptive parameter estimation method for sparse SAR imaging based on complex image and azimuth-range decouple is proposed. The proposed method combines the advantages of the azimuth-range decouple-based sparse SAR imaging and the complex image-based sparse SAR imaging method. In the proposed method, the parameter is pre-estimated based on the complex image. Adaptive parameter estimation is then processed in the raw data domain combining with the pre-estimated parameter and azimuth-range decouple operators. Compared with the adaptive parameter estimation method based on complex image, the proposed method can estimate the sparsity accurately when the raw data are downsampled. Compared with the adaptive parameter estimation method based on azimuth-range decouple, the proposed method has the lower computational complexity, which can be used in the large-scale scene. The simulation, airborne SAR data and Gaofen-3 SAR data experiment results demonstrate its validity. | 7,231 | 2019-10-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Mechanically scanned interference pattern structured illumination imaging
We present a fully lensless single pixel imaging technique using mechanically scanned interference patterns. The method uses only simple, flat optics; no lenses, curved mirrors, or acousto-optics are used in pattern formation or detection. The resolution is limited by the numerical aperture of the angular access to the object, with a fundamental limit of a quarter wavelength and no fundamental limit on working distance. While it is slower than some similar techniques, the lack of a lens objective and simplification of the required optics could make it more applicable in difficult wavelength regimes such as UV or X-ray. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
equivalent to those mentioned above for shadow imaging, most SI methods utilize lenses for pattern formation and projection [36][37][38].
With IPSII, interfering coherent beams make high-resolution patterns without the need for a projection lens and with no fundamental limit on working distance. However, for various reasons, most forms of IPSII employ lenses or other limiting optical elements. In structured illumination microscopy (SIM) [3,5], interference patterns are projected through the microscope objective of a conventional microscope, effectively making each pixel a 2 × 2 IPSII image, doubling the resolution [4] and improving optical sectioning [39]. Later research created interference patterns directly on the target, without passing through a lens, decoupling resolution and DOF [1,7,8]. These experiments used a small number of fixed beam angles, limiting the FOV for a single-pixel detector. As such, it was augmented by traditional imaging to increase the number of pixels in the image.
Further research with structured illumination includes techniques such as SPIFI [37,38] and CHIRPT [12,13] that multiplex spatial information into the signal frequency spectrum. A similar technique, DEEP [9,10,12], shows that spatial frequency can also be effectively multiplexed onto the signal time frequency spectrum. More recently, F-basis [11] demonstrated single-step 3D imaging with all information stored in the temporal frequency spectrum of the temporal signal of a single detector. DEEP and F-basis use acousto-optics to split the illumination into two or more beams [40] that can be quickly scanned, but are still limited by an objective lens used to recombine the beams. Axial structured imaging [14] was developed as an alternative to optical sectioning or light-sheet microscopy.
In this paper we present theory and proof-of-principle experiments of a truly lensless, singlepixel mechanically scanned IPSII technique that requires only flat mirrors and flat beam-splitters. Our design is based on an interferometer with computer controlled mirrors, used to form variable interference patterns which illuminate the object. This allows measurement of arbitrary spatial frequency components, resulting in a pixel count limited only by the precision of the mirrors, and an FOV limited only by the size of our laser beams. The resolution is limited by the numerical aperture of a single beam splitter. We show how this design could be used to measure phase as well as intensity of light passing through an object, essentially allowing for digital holography with a single pixel detector and an arbitrary FOV and working distance. With straightforward back-propagation methods, this would produce 3D images of absorptive, transparent, or complex objects.
It has been noted that the speed of mechanical scanning methods would be limited [10], and this is indeed the case with our design. We have not attempted to optimize the speed of our method, as fast optical IPSII technologies already exist (such as F-basis), and our method is not a good candidate for high speed optical microscopy. However, reasonable imaging speeds should be obtainable since the mechanical requirements of our system are similar to some widely-used mechanically-scanned implementations of LIDAR and confocal laser scanning microscopy.
The key advantage to MAS-IPSII is the lack of focusing elements, which should make it better suited than similar methods to UV and x-ray imaging at high resolutions. Even with small angles, x-ray IPSII could push the state of the art in x-ray resolution, as IPSII can achieve wavelength-scale resolution with an angle scan range of just 15 • per beam. Another advantage of MAS-IPSII is that it reduces IPSII to its simplest form, with two directly controllable beams with well defined, separate, measurable, and manipulatable beam modes that stay consistent throughout the measurement process. This allows us to experiment with issues of concern with all IPSII methods, such as the effects of wavefront distortions, errors in fringe angle and spacing, and the effect of wavenumber-dependent shadows and glare.
IPSII signal equation
In this section we derive the signal measured by the detector. This derivation applies to many forms of IPSII. It does not apply to methods which use more than two beams at a time [5] without modulating the signal generated from individual beam pairs at different frequencies (as in DEEP and F-basis) such that they can still be considered separately.
In IPSII, coherent beams overlap to create an interference pattern. A photodetector with a uniform response over the scale of the object measures reflection off or transmission through the object, yielding information about the overlap between the interference pattern and the object. This signal is measured for different interference patterns, generated by varying the angle between the beams. We assume two beams (laser beams in our case) with the same wavelength λ, aside from a small frequency offset ∆ω, which is negligible except where explicitly included in the following equations. The frequency offset sweeps the phase, causing fringes to move across the object and giving measurements of each spatial frequency at variable phases. An alternate method is to make a measurement at four discrete phases [31].
We assume a 2D object-light interaction m(x, y), which describes the object's response in amplitude and phase to an incident wave. For example, depending on whether the detector is placed in front or behind the object, m(x, y) may represent the complex Fresnel reflectivity or transmissivity, respectively, of the object at a given point into the solid angle subtended by the detector from that point. We also define M(x, y) = |m(x, y)| 2 . In the case of reflectivity or transmissivity, for example, this is the function that would be measured in conventional imaging-all phase information is lost in M(x, y).
The two lasers beam profiles are described by the complex transverse mode functions where x 1,2 and y 1,2 are the transverse beam coordinates, the real functions A 1,2 are the transverse field amplitudes of the modes, and the ϕ 1,2 functions represent the position dependent phases of the modes. If the wavefronts are flat, the resulting patterns will be sinusoidal in nature, such that individual measurements obviously correspond to spatial frequency components of the object. If the wavefronts are not flat, we show that the signal equation may still be put in terms of a Fourier transform. We assume imaging occurs within a small enough volume that we can ignore diffraction of the beam mode (i.e. we have dropped the axial dependence of the transverse amplitude and phase profiles aside from the phase propagation). The resulting models for the electric fields of the two beams are, where ì k 1,2 are the individual beam wave vectors, and differ from each other only in direction (neglecting the small frequency shift noted earlier) such that The lasers overlap on the object at an angle θ 1 + θ 2 from each other (see Fig. 1) and are oriented at an azimuthal angle φ. For this derivation, we assume they are symmetrically oriented around the z-axis of the object plane (i.e. θ 1 = θ 2 = θ) such that the difference between the k 1,2 vectors is in the x, y plane at an angle of φ from the x-axis. They are also positioned such that the centers of each mode are overlapping (i.e. x 1,2 = y 1,2 = 0 at x = y = 0). The mode of each laser is projected onto the object plane, resulting in a transform from x 1,2 and y 1,2 to x and y that is a function of θ and φ, though for small values of θ the transform is trivial (x 1,2 ≈ x and y 1,2 ≈ y). The intensity profile I of the interference pattern resulting from the overlapping beams contains a term constant in time plus an oscillating portion that is the product of an oscillating sinusoidal interference pattern and the individual modes, where k x = 2k l sin θ cos φ and k y = 2k l sin θ sin φ. The spacing of the fringes in the interference pattern (ignoring the contributions from A 1,2 ) is given by where n is the index of refraction of the medium. These interference fringes may be characterized by another vector k xx + k yŷ (not to be confused with the k-vectors of the lasers, ì k 1,2 ). The measured signal is proportional to the total power reflected from the object, or transmitted through the object, where C is a constant in time. (Note that M and A 1,2 are functions of x, y, though we have stopped explicitly calling that out in our notation for brevity.) This signal is further processed by performing dual-phase demodulation to extract the quadrature oscillating components of the signal. This results in the complex time averaged signal s(k x , k y ) (note that the complex phase of s represents the phase of s instead of the phase of the electric field waves). We also define a function combining the object with the beam profiles M (x, y) = M(x, y)A 1 (x, y)A * 2 (x, y). The signal equation then simplifies to which is easily recognizable as the Fourier transform of M (x, y) evaluated at (k x , k y ) in k-space. The transform in Eq. (8) becomes significantly more complicated if there is a non-negligible dependence on k x,y in M , and we are not aware of any general method of inverting such a transform, even if this dependence is known. Such k-dependence arises in A 1,2 because of the angle dependence of the transform from x 1,2 , y 1,2 to x, y. It may also arise when M(x, y) itself is dependent on k x,y which may occur if sampling only the light scattered in a single direction because of changes in the 'glare' off of the object as the direction of the illumination changes.
The k x,y dependence of A 1,2 can be made negligible under appropriate experimental conditions discussed latter. The k x,y dependence of M(x, y) can be made negligible by sampling a large solid angle. For example, a large detector can be placed directly behind the object for transmission imaging, or for reflection imaging an integrating sphere (with slots cut for beam access) or the average signal from multiple detectors at various angles could be used.
This treatment also applies to purely intensity dependent (i.e. incoherent) interactions such as fluorescence imaging or diffuse reflection, with a slight modification. If the object function is better described directly as an intensity response M(x, y) (e.g. describing a fluorophore density and response, which would be inappropriate to describe with a complex response m including phase shifts) then the object-light interaction is better modeled by directly multiplying the function M and the intensity of the interference pattern (instead of the electric fields). In this case, however, the derivation above proceeds identically from the second line in Eq. (7). While the end result is the same in this case, some modifications of the experiment, such as placing the object in just one beam, give different results.
IPSII imaging methods
There are several imaging opportunities readily apparent in Eq. (8). The process directly measures spatial frequency components of M (x, y) in k-space. After measuring sufficient information in k-space, a simple inverse transform gives you M (x, y), which is the product of three functions M(x, y), A 1 (x, y), and A 2 (x, y). Any one of these three may be effectively measured if the other two are known, or constant with respect to x and y. Any such method must also deal with the k x,y dependence of the beam profiles.
One such method is to make an intensity image of an object (i.e. M(x, y)). This could be a measurement of transmission or reflection, or even of incoherent processes such as fluorescence. This would be done using known, smoothly varying beam profiles. If the beam intensity is roughly constant over the object, the k x,y dependence of A 1,2 is removed, up to an overall cos θ intensity dependence, which may be exactly compensated for. A demonstration of this method is discussed in section 5.
Another imaging opportunity apparent in Eq. (8) is the possibility to measure complex fields. By setting m(x, y) and A 2 (x, y) to be constants, one can measure A 1 (x, y). This could be used to characterize the wavefronts of a laser beam. Or, by inserting an object into the beam, the complex transmission or reflection of the object, propagated to the measurement location, can be measured. By back propagating the result (which may be done with the full phase information), the full 3D complex object could imaged. Similar to other structured illumination holographic methods (see [41][42][43][44] for example), this could be useful as a single pixel method of acquiring high resolution digital transmission holograms of an object without a high resolution detector array.
Ideally, for hologram measurements only the angle of the reference beam would be scanned relative to the detector screen to avoid changing the projection of the light-field to be measured during the scan. Because only one beam is scanned and the beams are no longer symmetric about the z-axis, the fringe spacing given in Eq. (6) would be modified to d = λ/(n sin θ cos θ), decreasing the maximum resolution of this light field imaging by a factor of 2 relative to the object imaging resolution given in Eq. (9).
A third possible use of Eq. (8) is a simple method to characterize the intensity profile of a laser. While this could also be done with full phase information, using the aforementioned holographic imaging technique, doing so would require a quality reference beam (e.g. from spatial filtering, or using a separate, phase-locked laser with a clean mode). If only the intensity profile is needed, both beams can be simply derived from the same source (as in the interferometers described in section 5), such that A 1 (x, y) = A 2 (x, y). If M(x, y) is set to a constant M(x, y) simplifies to | A 1 (x, y)| 2 = | A 2 (x, y)| 2 . This method is illustrated in section 5.
IPSII imaging properties
The resolution in IPSII is related to the minimum fringe width, which depends on the maximum beam angle used. The pixel size dx min for a maximum angle between the interfering waves θ max is In IPSII, any point within the volume where the beams overlap for all beam angles will be imaged and in focus [10]. The fringes in an interference pattern are planar, and do not change in the z direction, so everywhere in the imaging volume is equally 'in focus' [1]. There is, however, an effective FOV due to the properties of Fourier transforms. Objects within the interference volume but outside of the FOV will be aliased onto the FOV [45]. The FOV of the reconstructed image depends on the spacing of measured points in k-space dk, and is given by The FOV may also be limited by the response region of the detector or the beam size. In fact, by intentionally limiting the field of view (using an aperture, for example), aliasing can be eliminated. One problem which could impose similar constraints as a DOF could occur when an object with protruding features casts shadows in the interference pattern, blocking one of the beams in some areas of illumination. The illumination in these areas will not oscillate, and will not contribute to the k-space measurement. Unfortunately, these shadows change with each k-space measurement, so the effect is more complicated than shadows in conventional imaging. Numerical calculations suggest that this tends to add distortions around protruding features. While this could be useful for identifying height changes, it may also distort an image beyond the point of being useful. This effect, which should be manifest in other forms of IPSII, appears to be an unexplored topic, and further work is needed to fully understand it.
The speed of image acquisition can be limited by mechanical limitations (as in our current implementation, discussed later), or by photon noise. Because each measurement collects light from the entire object, the photon noise limits in IPSII are equivalent to those of conventional wide-field imaging. If imaging speed is limited by photon noise, and if local intensity is limited to prevent photodamage or photobleaching, IPSII can, in principle, be much faster than rastering techniques. For an N pixel image, wide-field techniques like IPSII can be a factor of N faster than methods in which light is collected only from one region at a time. For 3D imaging, IPSII has an advantage over traditional optical sectioning [11], which must reject out-of-focus light with each measurement. Furthermore, because a multi-pixel detector is not needed, a wider variety of detector technologies are available, potentially reducing detector noise and shortening integration times. Two designs based on a Mach-Zehnder interferometer are presented. Both use computer controlled mirrors and a single piezomounted mirror for a phase sweep. The Mach-Zehnder layouts allow the angle between beams to vary from positive to negative and through the zero point. The first (a) requires a minimal number of optics, but the maximum angle is limited by the beam size, as the beam overlap diminishes with angle (demonstrated in by the picture on the right). This was the setup used to generate the 1D images shown in Fig. 3. The second implementation (b) adds another pair of mirrors to keep the beams centered during the angle scan. It also includes a bowtie configuration after the first beam splitter to simplify balancing the path lengths using the translation stages indicated by white arrows. This was the setup used for the 2D images shown in Fig. 4.
Experiment
IPSII requires (at least) two overlapping coherent beams, with spatial and temporal coherence lengths greater than the desired DOF and FOV, and a method to control the angles of the beams. We also need a way to scan and measure the relative phase of the two beams. To avoid aliasing, IPSII also needs a method to mechanically limit the FOV. Figure 2 shows two schematics we have implemented, both based on a Mach-Zehnder interferometer. The designs allow measurement of both positive and negative spatial frequencies. This can be helpful in practice to compensate for some beam wavefront imperfections, and would be necessary to implement holography as discussed in section 3. They also produce two outputs which are equal up to a π phase shift. We use one pattern to illuminate the object, and the other to illuminate a pinhole used as a phase reference. Separating the pinhole from the target object is convenient, but not strictly necessary.
The frequency difference in our setup is generated by linearly scanning the length of one arm of the interferometer with a mirror mounted on a piezo-electric transducer. Other similar phase scanning methods [46] could be used. Alternatively, acousto-optics or moving diffraction gratings could be used, as in other IPSII related methods. The advantage of acousto-optics is that the modulation is at a much higher frequency, which allows data to be taken faster. Higher frequency modulation also avoids noise at lower frequencies, which generally leads to a better signal-to-noise ratio. Some other methods using acousto-optics also use it as an angle scanning method [9][10][11], but in this case a high-NA lens is needed to reconverge the beams and amplify the angle, which diminishes some of the advantages of IPSII. To take an image, the mirrors are set to produce a particular interference pattern. Then the phase of the interferometer is ramped using the piezo-mounted mirror. Digital lock-in detection is applied to determine the quadrature components of the object signal relative to the signal from the detector behind the pinhole. These give the phase and amplitude of the spatial frequency component for the given interference pattern. The mirrors are then repositioned to create a different pattern, and this process is repeated. Once all Fourier coefficients have been measured, a simple inverse Fourier transform reconstructs the image. The data gathering and interpreting process is demonstrated in Fig. 3 using the simple 1D imaging setup shown in Fig. 2(a). The figure shows both an 'image' of a 1D object (a pair of vertically oriented wires) as well as a measurement of the laser beam profile, using the beam measurement method described in Sec. 5. The raw data for one pattern and the power spectrum of the resulting k-space measurements are shown, along with the image reconstructions.
The speed of our setup was mainly limited by the equipment available to us. We take data at a rate of about 1 k-space point/second, so that 2D image scans can take hours or days depending on the pixel count (the 2D images presented in this paper were taken in about a day). However, with better equipment and engineering the speed of the process could be reduced to just the speed of the raster scan of the mirror setup. For example, commonly available rotation stages with sufficient precision for 10 3 pixels per row could scan at least one k-space row/second, or about 15 minutes for a 1 megapixel image. Use of galvos or spinning mirrors could also be used to greatly increase the speed, but would also increase the engineering complexity.
Other ways to speed up image acquisition in our method (and other IPSII techniques) include various techniques developed for magnetic resonance imaging (MRI), such as partial Fourier reconstruction [47], parallel imaging [48], and compressive sensing [49]. Parallel imaging with IPSII has been demonstrated using conventional imaging optics [1]. Parallel imaging could be improved using the algorithms developed for MRI, such as SENSE [50] or GRAPPA [51]. These schemes only require the detectors to have slowly varying and different spatial response functions. These auto-calibrating algorithms could be implemented to perform parallel imaging without a lens, or with a low quality or poorly focused lens, as aberrations and focal blur mainly affect the individual detector responses (i.e. the object area contributing to the signal for each sensor), which are 'stitched' together by the auto-calibration. The speed up would be proportional to the number of sensors (e.g. the number of pixels in a sensor array) . Figure 4 shows the 2D image reconstructions resulting from imaging a USAF 1951 test target with varying FOV and resolution. These data were taken with the setup shown in Fig. 2(b), with a working distance (i.e. between the last beam splitter and the object) of about 100 mm. The measured resolutions agree with theoretical expectations for the maximum angle used. We tested resolutions to about 2µm, corresponding to an effective NA of . 12. As resolution appears only limited by the range of our motorized mounts and mirror size, higher resolution should be possible with a setup that allows for larger angles.
Results
The combination of effective NA and working distance are about at the limit of commercially available ultra-long working distance microscope objectives. Pushing past that mark with a setup like the one we used would only require larger mirrors. Optically flat mirrors are commercially available with diameters much greater than the commercially available lenses with an NA>.1.
The signal to noise ratio (SNR) in the images we took was limited by amplitude noise in our laser and technical noise in our digitization equipment (all of which were limited by equipment budget), and could be readily improved by better equipment and greater attention to signal engineering and noise isolation. Another limitation was the small range of our piezo actuator, which limited the signal to only 5-10 phase oscillations averaged over in our digital lock-in method. These limitations could be overcome by using other mechanical phase scanning methods [46]. Alternatively, use of an AOM could easily push ∆ω into the MHz or GHz range, where demodulation could be done with analog electronics, greatly improving the lock-in detection and enhancing SNR. In this case imaging speed would only be limited by the scan speed of the beam angle.
Conclusion
We have presented a method for lensless, single pixel, interference pattern structured illumination imaging using a mechanical angle scan. We derived a signal equation for our technique (and generally applicable to most two beam IPSII techniques) that includes effects from distortions in the wavefronts of the illumination. Our derivation describes how IPSII effectively measures the Fourier transform of the product of an object and two beam mode functions. We also discussed how this could be used to measure either an object, the mode of a laser, or to hologhically measure a 3D complex object. We demonstrated imaging of objects and the intensity of a laser by imaging 1D profiles of a laser, the shadow of a 1D test target, and 2D imaging of a resolution test target.
Our technique varies from related IPSII techniques in that it only requires simple flat optics (beam-splitters and mirrors). It does not require a lens, acousto-optics or custom engineered diffraction gratings. We generate the variable angles needed for IPSII instead by using a mechanical angle scan. This severely limits the speed of the process, making it an inferior candidate for many optical imaging applications. However, the lack of lens or other complicated optics could make it useful for a variety of cases and make it an appealing candidate for imaging with deep UV, X-rays, or other waves for which focusing elements are unavailable or impractical. Because it removes many of the technical complications that exist in related IPSII techniques, it may also be used to more easily isolate and study issues related to IPSII imaging, such as wavefront distortions, shadows, positioning errors in k-space, etc. It is relatively easy and inexpensive to implement compared to other IPSII techniques, and could be useful for low-cost high-resolution imaging applications where speed is less critical.
Funding
Brigham Young University's College of Physical and Mathematical Sciences; The National Defense Education SMART Fellowship program. | 6,183.8 | 2019-05-09T00:00:00.000 | [
"Physics"
] |
Impact of COVID-19 on Smallholder Poultry Farmers in Nigeria
: In sub-Saharan Africa, most households in rural communities keep smallholder poultry, and are exposed to harsh socio-economic conditions caused by COVID-19 pandemic due to the vulnerability of their production systems to crisis. This study assessed the impact of COVID-19 on 525 smallholder poultry farmers in five states of Nigeria. The study was conducted 15 months after the onset of the pandemic in Nigeria using structured questionnaires focused on socio-demography, income, production systems, markets, and food security. Average household size increased from 6.9 before COVID-19, to 8.3 during COVID-19, representing a 20.3% increase in population growth. Over half (52.6%) of this increase was due to childbirths. Average monthly income before and during the pandemic was reduced from NGN 22,565 (USD 62.70) to NGN15,617 (USD 38.10), respectively. During the pandemic, there was a 28.4% increase in the number of farmers living below the international poverty line of USD 1.90 per day. In addition, reliance on chickens for food and income was significantly ( p < 0.05) impacted by gender, location, household size, and monthly income. These results show that the COVID-19 pandemic had a significant effect on the livelihoods and food security of farmers, and the findings are essential in developing appropriate post-COVID-19 interventions for smallholder poultry production in Nigeria.
Introduction
Globally, the COVID-19 pandemic has caused major disruptions to several agricultural (livestock, crop, and horticulture) activities across different production systems. Consequently, this resulted in significant hardships and economic losses to households, particularly smallholder farmers who are less resilient and more vulnerable to shocks and disturbance within the production system [1,2]. The impact of COVID-19 on smallholder poultry households in sub-Saharan Africa is of particular importance because over 80% of all households keep poultry as a source of livelihood and food security [3]. Smallholder poultry is largely a subsistence-oriented poultry keeping of unimproved or improved dual-purpose (i.e., for eggs and meat) chicken breeds raised under scavenging or semi-scavenging production systems, using family labour and locally available feed resources [4][5][6][7]. In Nigeria, smallholder poultry accounts for between 65 and 77% of total poultry holdings [8][9][10], and women are the primary keepers [11,12] and main actors within the value chain.
In curtailing the outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Nigeria, government at both the federal and state level imposed and enforced drastic public health measures such as curfews, movement restrictions, lockdowns (partial and total), social distancing, ban on interstate travel, and closure of markets [13]. These public health measures impacted differently the lives, activities, and economy of people dwelling in different parts of the country [14]. This is partly due to the lack of synergy and cohesiveness between the government at the federal and state level on COVID-19 policy formulation, the extent of implementation, and the type of enforcement required [15]. The contradictory policies and varying levels of enforcement in managing The sampling represented over 40% of the total number of households from th ACGG smallholder poultry baseline survey (1200) (Alemayehu et al. 2018). Eligibility c teria included persons aged 18 years old and above, and having previously participate in both the baseline survey and on-farm activities during the ACGG project [24]. Partic pation in the ACGG project was an important criterion for household selection so as allow an unbiased comparison of the results from the baseline survey with this study results. For this study, respondents were defined as persons primarily responsible f keeping chickens in the household. All the respondents provided informed consent participate in the survey. The study was approved by the Review Committee of th CGIAR COVID-19 Hub: ILRI Nigeria 2021. The survey was conducted in 17 days, betwee 27 May and 13 June 2021. This was 15 months after the first documented case (27 Februar Reprinted with permission from ref. [23]. Copyright 2019 Federal Department of Forestry. A total of 525 households were selected. The model for the sampling was: 5 states → 3 senatorial districts per state → 3 local government areas (LGA) per state → 3 villages per state → 35 households per village → 105 households per state .
For adequate distribution of the households selected in each state, one LGA was purposively selected from each of the three senatorial districts. Each of the three villages was then randomly selected from the respective LGA.
The sampling represented over 40% of the total number of households from the ACGG smallholder poultry baseline survey (1200) (Alemayehu et al. 2018). Eligibility criteria included persons aged 18 years old and above, and having previously participated in both the baseline survey and on-farm activities during the ACGG project [24]. Participation in the ACGG project was an important criterion for household selection so as to allow an unbiased comparison of the results from the baseline survey with this study's results. For this study, respondents were defined as persons primarily responsible for keeping chickens in the household. All the respondents provided informed consent to participate in the survey. The study was approved by the Review Committee of the CGIAR COVID-19 Hub: ILRI Nigeria 2021. The survey was conducted in 17 days, between 27 May and 13 June 2021. This was 15 months after the first documented case (27 February, 2020) of COVID-19 in Nigeria [25].
Research Hypothesis
Null hypothesis: There is no significant impact of the COVID-19 pandemic on monthly income, poverty status, food security, flock size, accessibility of markets, sale of live birds/eggs, or household sizes of smallholder poultry farmers in the five agroecological zones of Nigeria.
Alternate hypothesis: The monthly income, poverty status, food security, flock size, accessibility of markets, sale of live birds/eggs, and household sizes of smallholder poultry farmers surveyed in the five agroecological zones of Nigeria are significantly impacted by the COVID-19 pandemic.
Data Collection and Analysis
This survey was a cross-sectional study conducted using a structured questionnaire with questions in the following areas: socio-demography, household income, flock size and production, markets, information and extension services, and food and consumption. The questions focused on the period before the COVID-19 outbreak (January 2020) and during COVID-19 (February 2020-May 2021). The questionnaires were interviewer administered by trained field officers who visited each of the participants in their respective households. Due to varying levels of COVID-19 restrictions on gatherings in each of the five states, focused group discussions and key informant interviews were not conducted. All the field officers had previously participated in the ACGG project and were familiar with the farmers, local languages, communities, and practice of smallholder poultry production. Each field officer was assigned to a village. Data were collected on smartphones using Google Forms and stored in real-time on Google cloud services. Geolocations of the study areas were also captured [26].
Data were subjected to descriptive (mean, standard deviation, frequency, percentage) and inferential statistics (Chi square (χ 2 ), Wilcoxon signed-rank test, Kruskal-Wallis H one-way analysis-of-variance (ANOVA) test). The following mathematical models [27] were used for the inferential analysis: where df = degrees of freedom, O i = observed value in category i, E i = expected value in category i, and χ 2 = Chi-square value. The Chi-square test was used to test the null hypothesis that there is no significant association between the independent variables (gender, location) and the dependent variables (poverty status, flock size, reliance on chickens, accessibility of markets, sale of live birds and eggs), before and during the COVID-19 pandemic.
and for the assumption of tied data pairs : where z is the test statistic tested against the Z-score, n is the number of sample pairs grouped either by gender (male, female) or region (north, south) before and during COVID-19, and R i is the (signed) rank of the absolute difference between the ith pair of values. This was used to test the null hypothesis that there is no systematic difference in the mean of the paired observations obtained for household size, average monthly income, and flock size before and during COVID-19 by gender and region.
Kruskal − Wallis H one − way ANOVA : H = 12 where H is the Kruskal-Wallis test statistic for the non-parametric one-way ANOVA (H > 2 groups), N is the total number of observations across all groups (location), T i is the sum of ranks of observations in the ith sample, and n i is the number of observations in the group i (i = 1, 2, . . . , I). For tied observations, a correlation factor was applied to H and mean ranks were used in comparing the group effect before and during COVID-19.
where t i is the number of observations within the groups of tied ranks. The null hypothesis test was that there is no difference among the five states (location) with respect to household size, monthly income, and flock size before and during COVID-19.
In addition, multivariate logistic regression analysis was used to test the degree of associations (odds ratio) between respondents' reliance on chickens (binary variable) and some predictors (location, gender, household size, monthly income).
where X is the jth predictor for the jth case representing the covariates X 1 , X 2 , . . . , X n ; β 0 is the intercept; β 1 X 1 to β n X n are the regression coefficients that represent log odds; n is the number of predictors; and π is the expected proportional outcome for the binary response variable. This was tested against the null hypothesis that there is no significant relationship between farmers' reliance on chickens and location, gender, household size, and monthly income. Effect size (Cohen's d) was determined using the Wilcoxon signed-rank test parameters to quantify the effect of COVID-19 on respondents' monthly income, household size, and flock size before and during the pandemic. Data were analysed in R [28] version 3.6.2 and Statistical Package for Social Sciences (SPSS version 20). Prior to analysis, data were tested for normality using the Shapiro-Wilk test. Data visualisations were presented using MS-Excel (Office 2019) and R.
Gender and Age of Respondents
The results of this study represent a total of 525 households, and two-thirds (66%) of the respondents were female (n = 346) whereas over one-third (34%) were male (n = 179) ( Figure 2). There were more female respondents than male in all the states except Imo State. Nasarawa (82%) had the highest percentage of female respondents, followed by Kebbi (73%), Rivers (69%), Kwara (56%), and Imo (49%) states. The average age of the respondents was 51.0 ± 21.6 years. The majority of the respondents (70.1%) were aged between 40 and 69 years old. Respondents aged 18 to 39 years and 70 years and above were 18.3% and 11.6%, respectively. and flock size before and during the pandemic. Data were analysed in R [28] version 3.6.2 and Statistical Package for Social Sciences (SPSS version 20). Prior to analysis, data were tested for normality using the Shapiro-Wilk test. Data visualisations were presented using MS-Excel (Office 2019) and R.
Gender and Age of Respondents
The results of this study represent a total of 525 households, and two-thirds (66%) of the respondents were female (n = 346) whereas over one-third (34%) were male (n = 179) ( Figure 2). There were more female respondents than male in all the states except Imo State. Nasarawa (82%) had the highest percentage of female respondents, followed by Kebbi (73%), Rivers (69%), Kwara (56%), and Imo (49%) states. The average age of the respondents was 51.0 ± 21.6 years. The majority of the respondents (70.1%) were aged between 40 and 69 years old. Respondents aged 18 to 39 years and 70 years and above were 18.3% and 11.6%, respectively.
Household Size
Average household size before and during COVID-19 was 6.95 ± 2.14 and 8.29 ± 2.56, respectively ( Table 1). The effect of COVID-19 on average household size was moderate (d = 0.62) and statistically significant (Z = −14.29, p ≤ 0.001). Table 1 shows that across the locations, the percentage increase in average household size in the period before and during COVD-19 was approximately 35.7% (Rivers), 24.5% (Kwara), 23.6% (Imo), 10.1% (Kebbi), and 9.4% (Nasarawa). Respondents in the northern region had a lower (13.2%) percentage increase compared to those in the south (30.3%). Both male (19.7%) and female (19.2%) respondents had about a 20% increase in average household size in the period before and during COVID-19. Gender, location, and region of the respondents had a statistically significant (p ≤ 0.001) association with household size (Figure 3).
Household Size
Average household size before and during COVID-19 was 6.95 ± 2.14 and 8.29 ± 2.56, respectively ( Table 1). The effect of COVID-19 on average household size was moderate (d = 0.62) and statistically significant (Z = −14.29, p ≤ 0.001). Table 1 shows that across the locations, the percentage increase in average household size in the period before and during COVD-19 was approximately 35.7% (Rivers), 24.5% (Kwara), 23.6% (Imo), 10.1% (Kebbi), and 9.4% (Nasarawa). Respondents in the northern region had a lower (13.2%) percentage increase compared to those in the south (30.3%). Both male (19.7%) and female (19.2%) respondents had about a 20% increase in average household size in the period before and during COVID-19. Gender, location, and region of the respondents had a statistically significant (p ≤ 0.001) association with household size (Figure 3).
Before the pandemic, about two-thirds (65.9%) of the respondents had average household sizes ranging between six and nine, whereas household sizes ranging between one and two, three and four, and 10 and above were reported by 1.5%, 22.7%, and 9.9% of the respondents, respectively ( Table 2). During the pandemic, the distribution of respondents across the range of household sizes was 0.2% (1-2), 14.1% (3-4), 54.3% (6-9), and 31.4% (10 and above). Table 2 shows a statistically significant (p < 0.05) distribution of respondents across a range of household sizes by gender and location in the period under study.
Figure3
* p < 0.05; ** p < 0.01; **** p < 0.0001; ns, not significant ** p < 0.01; *** p < 0.001; **** p < 0.0001; ns, not significant Figure 4 shows that more than half (57%) of the respondents experienced an increase in household size during COVID-19, either due to new childbirths or the arrival of relatives, and a larger proportion (88.8%) of these respondents experienced about a 50% increase in average household size (Table 2). Table 3 shows that over half (56.7%) of the female respondents experienced close to a 100% increase in average household size compared to males (50.3%). Gender (F = 4.79, p = 0.03) and location (F = 12.10, p = 0.00) had a statistically significant influence on the percentage increase in average household size. Tables 4 and 5 highlight the percentage distribution of respondents according to the number of persons added to the households through childbirth and the arrival of relatives during the pandemic. The number of persons added to the households through childbirth ranged from one (22.5%), to two (13.5%), to three and above (16.6%), whereas the number of persons added to the households through arrival of relatives ranged from one, to two, to three, to over three in 13.3%, 15.2, 1.5% and 12.4% of the respondents' households, respectively. Gender was found to have a statistically significant (χ 2 = 10.63, p = 0.01) effect only on the number of children added to households by birth, but location significantly influenced both the number of persons added through childbirth (χ 2 = 72.24, p = 0.00) and the arrival of relatives (χ 2 = 102.30, p = 0.00). Tables 4 and 5 highlight the percentage distribution of respondents according to the number of persons added to the households through childbirth and the arrival of relatives during the pandemic. The number of persons added to the households through childbirth ranged from one (22.5%), to two (13.5%), to three and above (16.6%), whereas the number of persons added to the households through arrival of relatives ranged from one, to two, to three, to over three in 13.3%, 15.2, 1.5% and 12.4% of the respondents' households, respectively. Gender was found to have a statistically significant (χ 2 = 10.63, p = 0.01) effect only on the number of children added to households by birth, but location significantly influenced both the number of persons added through childbirth (χ 2 = 72.24, p = 0.00) and the arrival of relatives (χ 2 = 102.30, p = 0.00).
Income
As shown in Table 1, the average monthly income of the respondents before and during COVID-19 was NGN 22,564.95 ± 18,623.52 (USD 62.7) and NGN 15,616.76 ± 15,610.21 (USD 38.1), respectively. COVID-19 had a moderate (d = 0.66) and statistically significant (Z = −15.12, p ≤ 0.001) effect on the average monthly income of the respondents. There was a statistically significant (p ≤ 0.001) effect of gender and location on average monthly income, but region had no statistically significant (p > 0.05) effect ( Figure 5). The percentage reduction in average monthly income in the period before and during COVD-19 ranged from 17.4% (Kebbi) to 43.6% (Nasarawa), and from 27.9% (North) to 35.0% (South) across location and region, respectively (Table 1). Males and females both had about a 30.8% reduction in average monthly income.
During the pandemic, two-thirds of the respondents were observed to be living below the average monthly income, and women account for a greater percentage (73.2%) of this group of persons ( Figure 6). As shown in Figure 6, a vast majority of respondents (79.4%) reported a reduction in average monthly income during the pandemic whereas only few (7.4%) experienced an increase. However, some (13.1%) reported no changes in their average monthly income (Figure 7). Table 6 shows the distribution of respondents along a range of average monthly income before and during COVID-19. Gender and location were significantly (p < 0.05) associated with the percentage distribution of respondents across the range of average monthly income. Figure 5 * p < 0.05; ** p < 0.01; **** p < 0.0001; ns, not significant ** p < 0.01; *** p < 0.001; **** p < 0.0001; ns, not significant (79.4%) reported a reduction in average monthly income during the pandemic whereas only few (7.4%) experienced an increase. However, some (13.1%) reported no changes in their average monthly income (Figure 7). Table 6 shows the distribution of respondents along a range of average monthly income before and during COVID-19. Gender and location were significantly (p < 0.05) associated with the percentage distribution of respondents across the range of average monthly income. Respondents' Income Relative to Poverty Line Table 7 shows the percentage of respondents whose average monthly income was below and above the international poverty line (IPL) of USD 1.90 per day (2011 PPP) [30] before (equivalent of NGN 684/day or NGN 20,520/month) and during (equivalent of NGN 779/day or NGN 23,370/month) COVID-19. The proportion of women living below the IPL before and during the pandemic was 69.9% and 84.1%, respectively ( Figure 8). The proportion of respondents living below the IPL before COVID-19 ranged from 42.9% (Kwara) to 76.2% (Rivers), whereas during the pandemic, it ranged from 66.7% (Kwara) to 92.4% (Nasarawa). Before and during COVID-19, gender, location, and household size significantly (p < 0.05) influenced the distribution of respondents in relation to IPL. Table 8 shows respondents whose average monthly income was above the IPL before COVID-19, but moved below the IPL during COVID-19. This move impacted more females (57) than males (43) and the effect of gender was statistically significant (χ 2 = 4.31, p = 0.04). Kwara (27.6%), Imo (24.8%), and Nasarawa (23.8%) states accounted for over 70% of the number of respondents who before COVID-19 were earning above the IPL, but during COVID-19 had moved below the poverty line. This move was also significantly influenced by location (χ 2 = 21.10, p = 0.00). Table 8 shows respondents whose average monthly income was above the IPL before COVID-19, but moved below the IPL during COVID-19. This move impacted more females (57) than males (43) and the effect of gender was statistically significant (χ 2 = 4.31, p = 0.04). Kwara (27.6%), Imo (24.8%), and Nasarawa (23.8%) states accounted for over 70% of the number of respondents who before COVID-19 were earning above the IPL, but during COVID-19 had moved below the poverty line. This move was also significantly influenced by location (χ 2 = 21.10, p = 0.00). Table 8. Respondents whose monthly income was above the international poverty line before COVID-19 but below the poverty line during COVID-19. With respect to the national poverty line of NGN 137,430 per annum (NBS 2020), 35.6% and 53.3% of the respondents were living below the line (i.e., NGN 11,425.50/month) before and during COVID-19, respectively. This represents a 50% increase in the number of respondents within the poverty bracket during COVID-19. According to this national poverty level, 28% of the farmers (338) previously outside the national poverty line before COVID-19 had plunged into poverty. That is, 95 farmers were plunged into poverty.
Contribution of Chicken to Food Security and Income during the Pandemic
Majority of the respondents' households (75%) relied more on chickens for food and income during COVID-19. Sixty-four percent of these households (394) were of female respondents compared to male (36%) respondents. Rivers State had the highest number of households (93) that were dependent on chickens, followed by Nasarawa (89), Kwara (76), Kebbi (70), and Imo (66) states. Table 9 highlights the relationship between household reliance on chickens (Yes/No), and location, gender, household size (arrival of relatives/childbirth), and monthly income (increase/decrease/no change). The model shows that location, gender, household size, and monthly income were good predictors of respondents' reliance on chickens (β = 1.101, df = 1, p < 0.05). During the pandemic, location, gender, increase in household size due to arrival of relatives, and changes in monthly income were 0.835, 0.574, 1.227, and 0.625 times (odds ratios) more likely to influence (p < 0.05) reliance on chickens, respectively. Tables 10 and 11 show a bivariate distribution of the reliance on chickens against changes in average monthly income (χ 2 = 18.71, p = 0.00) and household sizes (χ 2 = 13.62, p = 0.00) during COVID-19. Over 80% of the respondents who reported a reduction in average monthly income during the pandemic relied more on chickens for money and food. A vast majority of the respondents (88.4%) with household sizes greater than six relied more on chickens for money and food during COVID-19.
Household Flock Size
Average flock size per household was 29.65 ± 2.49 and 30.28 ± 2.97 before and during the pandemic, respectively ( Table 1). The difference in flock size was statistically significant (Z = −4.74, p ≤ 0.001), and COVID-19 had a small effect (d = 0.21) on the observed changes in average flock size. Respondents in Imo State had the highest (4.2%) percentage increase in flock size (Nasarawa, 3.7%; Kebbi, 2.8%; Kwara, 0.9%), whereas there was a 1.0% reduction in flock size in Rivers State. Respondents residing in the northern part of the country had a higher (2.5%) percentage increase in flock size compared to those in the south (1.6%). Figure 9 shows that the average flock size was significantly (p ≤ 0.001) influenced by location and region. Gender had no statistically significant (p > 0.05) effect on average flock size. * p < 0.05; ** p < 0.01; **** p < 0.0001; ns, not significant Table 12 shows that the majority (76.2%) of the respondents reported an increase in flock size during the pandemic through the hatching of chicks by local hens (50.5%), the purchase of DOC (22.8%), or gifts (2.9%). Compared to males (27.7%), more female respondents (53.8%) increased their flock size through the hatching of chicks by local hens. On the other hand, about one-third (32.5%) of male respondents' flock size increased by purchase of day-old chicks compared to females (17.9%). A greater percentage (62.3%) of respondents attributed the decrease in flock size to the consumption and sale of live birds. About a quarter (24.8%) of the respondents associated the decrease with theft, mortality, and predation, whereas less than one-sixth (13.0%) gave away chickens as gifts or donations. Sale of chickens contributed most to the decrease in flock size for all states except Kwara and Rivers states, where gifts/donations and consumption most influenced the decrease in flock size, respectively. Gender and location significantly (p < 0.05) influenced changes (increase and decrease) in respondents' flock size during the pandemic. Tables 13 and 14 show that gender and location significantly (p < 0.05) influenced respondents' accessibility to markets and the sale of chicken products during the pandemic. Most of the respondents (84.6%) indicated that COVID-19 restrictions and lockdowns negatively impacted access to markets for the sale of live birds and eggs. This resulted in fewer sales for over two-thirds of the respondents (69.3%) ( Table 12). Only 17.5% of the respondents had more sales, and 13.2% of the respondents observed no difference in sales during the pandemic. The restrictions reduced the accessibility of markets for more females (64%) than males (36%) ( Table 12). Kebbi State had the highest percentage (99%) of respondents whose access to markets was affected by the restrictions and lockdowns, followed by Imo (96.2%), Kwara/Rivers (91.4%), and Nasarawa (44.8%) states. Access to agricultural extension services was equally impacted for a vast majority of the respondents (88.4%).
Discussion
The study participants were selected from locations (Kebbi, Nasarawa, Kwara, Imo, and Rivers states) representative of different agroecology and vegetation zones, where twothirds of the total poultry population in Nigeria are being raised under smallholder poultry production systems [31,32]. In addition, the participants reflected certain socio-cultural (e.g., childbirth, communal living, family sizes) and socio-economic (e.g., income, markets, agricultural activities) characteristics of households within the respective geographical regions (North West: Kebbi; North Central: Kwara, Nasarawa; South East: Imo; South South: Rivers) [33][34][35].
For this study, there was a distinction between respondents who were the farmers keeping the chickens in the households and the household heads. However, there were respondents (43%) who were both the farmer and the household head. This occurred more with male respondents (88.3%) than with females (19.7%), and the majority of these female respondents resided in Rivers (24) and Imo (18) states. This agrees with previous reports that there are fewer female-headed smallholder poultry households in Nigeria, with Rivers State having a significant proportion of these households compared to the other states [21,36]. The implication of this disparity with respect to women's empowerment, gender roles (chicken ownership, decision making), and production efficiency within households needs to be further investigated in a post-COVID-19 era. Overall, this study confirms that women are the primary producers of smallholder poultry in Nigeria [12].
There was a 20.3% increase in average household size before (6.9) and during (8.3) COVID-19. The average household size before the pandemic was similar to that observed (6.5) in the baseline study (December 2015) conducted during the ACGG project [21]. This shows that between December 2015 and January 2020 (a four-year period), there was a 6.2% increase in household size. The percentage increase in household size within that fouryear period is lower than the 20.3% increase in average household size observed within the 15-month (February 2020-May 2021) COVID-19 period of this study. The alarming increase in household size within this study period was attributed to childbirths and the arrival of relatives. The arrival and welcoming of relatives and extended kin during periods of crisis (death, job loss, insurgency) has been described as an intricate but essential ingredient of societal responsibilities of African, and in particular Nigerian, households [37,38]. Compared with the increase in household size by the addition of relatives, childbirths accounted for over half (52.6%) of the total increase in household size. This is consistent with the ranking of Nigeria as third on the list of countries with the highest expected number of births during the pandemic [39]. Ranking of the states by childbirth was as follows: Kwara (63.8%), Rivers (61.2%), Kebbi/Nasarawa (48.6%), and Imo (40%). This implies that households in the northern part of Nigeria had more childbirths during the pandemic than those in the south and follows a similar trend reported by NDHS [35] for fertility rates prior to the pandemic.
Farmers' average monthly income was reduced significantly, by about one-third (31%), before (NGN 22,565/USD 62.70) and during (NGN 15,617/USD 38.10) the pandemic. However, the average monthly income (NGN) during COVID-19 was higher than that previously reported for similar smallholder households in Nigeria (NGN 15,100) [12]. This may be attributed to current changes in the price of agricultural commodities (e.g., eggs, meat, live birds) and food inflation, which smallholder poultry farmers may have also profited from [40,41]. During the pandemic, a vast majority of the farmers (78.5%) were living below the international poverty line (USD 1.90/person/day), and there were more poor women (71%) than men (29%). Compared with the southern regions (South East, South South), farmers in the northern parts (North West, North Central) of Nigeria formed a larger proportion (60%) of those living below the poverty line. This corroborates a recent report on the outlook of poverty in Nigeria post COVID-19 that predicted a higher risk of poverty in the north compared to the south [42,43]. There was a 28.3% increase in the number of smallholder poultry farmers living below the poverty line during (412) the pandemic compared to before the pandemic (321). This is within the projected increase in poverty rate due to COVID-19 for the period 2020 (12.8%) to 2022 (45.2%) in Nigeria [42][43][44][45]. Within the 15-month period (February 2020 and May 2021), 100 smallholder poultry farmers (Male: 43, Female: 57) were newly plunged into poverty. This represents about half (49%) of the total number of the farmers (204) living above the poverty line before COVID-19.
ACGG intervention in Nigeria (2015-2019), contributed significantly to bringing smallholder poultry farmers out of poverty [12]. However, due to COVID-19, some of these farmers were plunged back into poverty. This suggests that about half of the gains made through the ACGG interventions have been lost, and the remaining gains are equally at risk of being eroded by the prolonged outbreak of the COVID-19 pandemic.
In addition to location and gender, there was a strong relationship between household size and poverty status both before and during COVID-19. This finding is in agreement with previous studies on the relationship between household size and poverty in rural communities of Nigeria [46,47]. Farmers with household sizes ranging between three and five and six to nine constituted the highest percentage of those living above and below the poverty line, respectively. This suggests that population growth at household levels increases the risk of poverty for smallholder poultry farmers.
Most of the farmers (75%) relied more on chickens as a source of food and income during the pandemic, and Rivers State had the highest percentage (88.6%) of farmers dependent on chickens. The observed reliance on chickens in Rivers State suggests an increased pressure on food and money caused by the high increase (36%) in average household size compared to the other states. During the pandemic, changes in average monthly income and household size significantly influenced farmers' reliance on chickens for household food and income. Compared with those who experienced no change in reliance on chickens, farmers who relied more on chickens had a larger reduction (42%) in average monthly income, as well as a higher increase (17.7%) in household size during COVID-19 than before COVID-19. This agrees with the reports that smallholder chickens are valuable assets that provide sustenance and supplementary income [48,49] during emergencies such as the COVID-19 pandemic. Increase in household size due to arrival of kin, and not childbirth, was a significant predictor of farmers' reliance on chickens. This was not surprising, as the presence of more grown-ups in the households increased the demand for food, and chickens are considered a primary choice for low-cost animal protein sources in low-to-middle income countries (LMIC) [50][51][52].
As expected, the increased reliance on chickens by farmers during the pandemic resulted in a decrease in flock size, mostly through sale of live birds and consumption of chickens/eggs. About two-thirds (64.3%) of the farmers (207) whose flock size was reduced due to the sale of live birds were from the northern region of the country. On the other hand, reduction in flock size through consumption of chickens or eggs was highest (54.2%) in the southern parts. This observation is in consonance with the findings of Alabi [12], where chicken and egg consumption were highest in the South South (Rivers) and South East (Imo) states of Nigeria. The result also aligns with the objectives of keeping chickens across the agroecological zones [21]. This implies that irrespective of the COVID-19 pandemic, the purpose of keeping smallholder poultry has not changed.
In spite of the reduction in flock size, average flock size increased by 2.0% during (30.3) the pandemic compared to before (29.7) the outbreak of COVID-19. The average flock sizes observed in this study were higher than those previously reported (28) for a similar group of farmers sampled within the same locations [21].
During COVID-19, less than a quarter (23%) of the farmers purchased day-old chicks, whereas over half (51%) of the farmers resorted to using local hens to brood and hatch chicks for re-stocking. This was largely due to COVID-19-related disruptions in the supply chain of the day-old chicks of the farmer-preferred [11], dual-purpose chicken breeds (FUNAAB Alpha, Noiler, Sasso) [7,22]. This agrees with an earlier report on the logistical challenges associated with poultry input supply chains during the pandemic [15]. Gender played a significant role in the re-stocking options available to farmers during COVID-19, as more women purchased day-old chicks (52%) and used local hens (70%) to increase their flock size, compared to men.
At the outbreak of the pandemic, when curfews, lockdowns, and movement restrictions were in place, there was limited access for most women (64%) to trade at the markets or hawk along the highways. Across each of the states, except Nasarawa (45%), a vast majority (90%) of the farmers were challenged by the inaccessibility of the markets. Nasarawa State also had the lowest percentage of farmers (45%) affected by the restrictions, lockdowns, and closure of markets. This suggests a flexibility in the enforcement of the government's policy on the closure of markets and movement restrictions. It may also be indicative of the different marketing channels available to smallholder poultry producers in Nasarawa State as well as their ingenuity compared to the other states. As may be expected, Nasarawa State had the highest proportion (38%) of farmers who experienced an increase in the sale of live birds and eggs during the pandemic. However, overall, farmers experienced less sales during the pandemic, and this contributed to the changes in the average monthly income.
Conclusions
This study identified the impact of the COVID-19 pandemic on smallholder poultry production in Nigeria, and highlighted its threat to household livelihoods and food security. These findings provide an update on the existing baseline surveys on smallholder poultry production in Nigeria, and are essential for designing appropriate interventions in a post-COVID-19 era. The study emphasises a strong association between gender, location, household size, and monthly income on the livelihoods and food security of smallholder poultry farmers.
Overall, women were more affected by the pandemic than men. Ranking the states based on the observed impact of COVID-19 on average monthly income, poverty rate, household size, flock size, reliance on chickens, sales, and accessibility of markets during the pandemic placed Kwara State (Southern Guinea Savanna) as the most impacted, followed by Rivers State (Mangrove/Freshwater Swamp Forest), Kebbi State (Sudan Savanna), Nasarawa State (Northern Guinea Savanna), and Imo State (Lowland Rainforest/Derived Savanna).
Compared with the other states, Kebbi and Nasarawa states had the highest percentage of female smallholder poultry farmers living below the poverty line. Based on the results provided by this study, the gender implication of the impact of COVID-19 on the production and productivity of smallholder poultry farmers requires a more detailed investigation.
In view of the challenges observed during the pandemic and the unique characteristics of the smallholder poultry production system, there is a need to re-examine the smallholder poultry value chain (SPVC) along geographical regions. This has a possibility of identifying specific distinguishing nodes and actors for redefining the SPVC within the regions. It will also create new marketing channels and supply chains for improved efficiency within the production system.
Considering the role of local chickens during the COVID-19 pandemic, interventions aimed at improving smallholder poultry production systems through the availability of low-input high-output genetics should integrate the conservation and genetic improvement of local chickens. Poultry breeding schemes specific to smallholder poultry may require a considerable modification of the well-established, commercial-oriented breeding scheme set up to accommodate community-based breeding programmes. | 8,576.4 | 2021-10-17T00:00:00.000 | [
"Economics"
] |
Transimpedance Amplifier for Noise Measurements in Low-Resistance IR Photodetectors
: This paper presents the design and testing of an ultra-low-noise transimpedance ampli-fier (TIA) for low-frequency noise measurements on low-impedance (below 1 k Ω ) devices, such as advanced IR photodetectors. When dealing with low-impedance devices, the main source of background noise in transimpedance amplifiers comes from the equivalent input voltage noise of the operational amplifier, which is used in a shunt–shunt configuration to obtain a transimpedance stage. In our design, we employ a hybrid operational amplifier in which an input front end based on ultra-low-noise discrete JFET devices is used to minimize this noise contribution. When using IF3602 JFETs for the input stage, the equivalent voltage noise of the hybrid operational amplifier can be as low as 4 nV/ √ Hz, 2 nV/ √ Hz, and 0.9 nV/ √ Hz at 1 Hz, 10 Hz, and 1 kHz, respectively. When testing the current noise of an ideal 1 k Ω resistor, these values correspond to a current noise contribution of the same order as or below that of the thermal noise of the resistor. Therefore, in cases in which the current flicker noise is dominant, i.e., much higher than the thermal noise, the noise contribution from the transimpedance amplifier can be neglected in most cases of interest. Test measurements on advanced low-impedance photodetectors are also reported to demonstrate the effectiveness of our proposed approach for directly measuring low-frequency current noise in biased low-impedance electronic devices.
Introduction
Noise measurements can be extremely useful in the characterization of electronic devices, since a proper interpretation of the measured noise spectra can provide information on their quality and reliability [1][2][3][4][5].In the case of sensor devices such as photodetectors, reliable measurements of their current noise characteristics are needed to assess the value of fundamental parameters connected with their responsivity and noise [6,7].Their values depend on their bias conditions, temperature, and frequency [8][9][10].Noise models exist that might allow the estimation of such parameters, but they are generally quite inaccurate, because the low-frequency part of the noise spectrum is the most difficult to estimate and to correlate with the device structure [11]; therefore, actual noise measurements on each sample are required to obtain sensible results [12,13].
Low-frequency noise measurements on electronic devices can be performed in two configurations: voltage noise measurements through the Device Under Test (DUT) biased with a constant current I B ; or measurements of the current noise through the DUT biased with a constant voltage V A [14].The measurement of the Power Spectral Density (PSD) of the voltage noise S VD or the current noise S ID provides the very same informa-tion, as long as the current-voltage characteristic I B (V B ) of the DUT and its small signal impedance Z D at the selected bias point and vs. frequency are known, since [15,16]: It may be worth mentioning that throughout this paper, whenever quantities representing the PSD of voltage or current fluctuations appear in an equation, it is intended that these values be expressed in the appropriate units, V 2 /Hz and A 2 /Hz.However, as is common practice in the field of noise measurements and instrumentation, when referring to specific values in the text, we will often express these quantities in terms of their square roots (i.e., V/ √ Hz and A/ √ Hz).From an experimental point of view, it is easier to perform voltage noise measurements in the case of low-impedance DUTs, while current noise measurements are restricted to cases of high-impedance DUTs [17,18].The advanced photodetectors we plan to investigate are characterized by impedances that can be on the order of or even below 1 kΩ, and thus belong to the class of devices for which voltage noise measurements are more easily performed.The problem, however, is that some interesting device parameters are more directly related to the current noise [19].Using Equation ( 1) to obtain S ID from S VD , however, requires detailed knowledge of the device impedance, which therefore needs to be accurately measured under exactly the same environmental and bias conditions as those under which the noise voltage spectrum was obtained.This procedure can become extremely time consuming and prone to error, since two different measurement steps and setups (for noise measurement and for impedance measurements) are involved.
For this reason, we decided to explore the feasibility of the realization of an ultra-lownoise transimpedance amplifier dedicated to the reliable measurement of low-frequency current noise in low-impedance devices.In Section 2, the proposed approach for the design of the TIA is presented; in Section 3, details on the circuit implementation are given; in Section 4, the results of measurements performed using photodetectors as DUTs are presented; and in Section 5, conclusions are drawn.
Proposed Approach
To better understand the challenges that must be faced when designing a transimpedance amplifier for performing current noise measurements on a low-impedance DUT, the simplified circuit presented in Figure 1 It may be worth mentioning that throughout this paper, whenever quantities representing the PSD of voltage or current fluctuations appear in an equation, it is intended that these values be expressed in the appropriate units, V 2 /Hz and A 2 /Hz.However, as is common practice in the field of noise measurements and instrumentation, when referring to specific values in the text, we will often express these quantities in terms of their square roots (i.e., V/√Hz and A/√Hz).
From an experimental point of view, it is easier to perform voltage noise measurements in the case of low-impedance DUTs, while current noise measurements are restricted to cases of high-impedance DUTs [17,18].The advanced photodetectors we plan to investigate are characterized by impedances that can be on the order of or even below 1 kΩ, and thus belong to the class of devices for which voltage noise measurements are more easily performed.The problem, however, is that some interesting device parameters are more directly related to the current noise [19].Using Equation (1) to obtain SID from SVD, however, requires detailed knowledge of the device impedance, which therefore needs to be accurately measured under exactly the same environmental and bias conditions as those under which the noise voltage spectrum was obtained.This procedure can become extremely time consuming and prone to error, since two different measurement steps and setups (for noise measurement and for impedance measurements) are involved.
For this reason, we decided to explore the feasibility of the realization of an ultralow-noise transimpedance amplifier dedicated to the reliable measurement of low-frequency current noise in low-impedance devices.In Section 2, the proposed approach for the design of the TIA is presented; in Section 3, details on the circuit implementation are given; in Section 4, the results of measurements performed using photodetectors as DUTs are presented; and in Section 5, conclusions are drawn.
Proposed Approach
To better understand the challenges that must be faced when designing a transimpedance amplifier for performing current noise measurements on a low-impedance DUT, the simplified circuit presented in Figure 1 is considered.
In this system, the device under test (ZD) is biased at a constant voltage VB because of the virtual short circuit between the inputs of the operational amplifier OA1.The transimpedance gain between the DUT current source iD and the output voltage of OA1 is set by the resistor RR.In choosing the value of RR, the linearity range of OA1 must be taken into account, since the DC current flowing through the DUT also flows through the feedback resistance, causing a large DC component.Higher values of resistance RR will result in higher gain, but also in a reduced maximum bias current for the DUT.It will be assumed, In this system, the device under test (Z D ) is biased at a constant voltage V B because of the virtual short circuit between the inputs of the operational amplifier OA 1 .The transimpedance gain between the DUT current source i D and the output voltage of OA 1 is set by the resistor R R .In choosing the value of R R , the linearity range of OA 1 must be taken into account, since the DC current flowing through the DUT also flows through the feedback resistance, causing a large DC component.Higher values of resistance R R will result in higher gain, but also in a reduced maximum bias current for the DUT.It will be assumed, however, that the gain of the first stage is sufficient to ensure a negligible influence of the noise contributions from the following stages to the overall noise of the system.The high-pass filter C 1 R 1 rejects the DC component at the output of OA 1 so that only the noise signal can be further amplified by the voltage amplifier.The value of A v ensures that the signal level at its output is compatible with the input range of the spectrum analyzer used for spectral estimation.The relevant noise sources that contribute to the background noise (BN) are also shown in Figure 1.For the amplifier to be effective, we need the noise contribution of these sources to be negligible with respect to the noise generated by the DUT.In the measurement bandwidth (determined by the corner frequency of the high-pass filter C 1 R 1 and the bandwidth limit of the transresistance and the voltage amplifier stages), the Power Spectral Density (PSD) S OUT recorded by the spectrum analyzer is: with where S in , S en , and S eB are the PSDs of the noise sources i n , e n and e B , k is the Boltzmann constant, and T is the absolute temperature.To obtain the expression of the background noise S iBN , as shown in Equation ( 3), we assumed all noise sources to be uncorrelated.When dealing with high DUT impedances, relatively high values of S en and S eB can be tolerated, and this means that Junction Field-Effect Transistor (JFET) or Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET) input operational amplifiers can be used to minimize the contribution from the current noise S in .Indeed, in many cases of interest, the main contribution to the background noise comes from the feedback resistance R R , which must be chosen to be as large as possible, in a manner compatible with the limitations mentioned above as well as the desired bandwidth of the system [20,21].However, for DUT impedances on the order of 1 kΩ or less, the contribution from S en and S eB can become relevant and cannot be neglected any longer.To simplify this discussion, we limited our analyses to the low frequency range, in which the flicker noise component can be more easily detected.Moreover, at low frequencies, we assume the DUT impedance to behave as a resistance, that is Z D ≈ R D .For this reason, the DUT model does not contain any reactive component.
Flicker noise can only be measured when it is greater than the thermal noise of the DUT.Therefore, the thermal noise of the DUT represents a reference level with respect to which the background noise of the amplifier must be minimized.Minimizing the ratio between the DUT thermal noise S D and the background noise means minimizing the quantity Q n : From Equation (4), we conclude that R D /R R << 1 is a necessary condition to obtain S iBN /S D << 1.If this condition is satisfied, we also have R D R R ≈ R D and Equation (4) can be rewritten as: If we resort to either batteries or ultra-low-noise voltage sources for obtaining the required bias for the DUT, S eB can be made negligible [22].From Equation (5), if S eB ≈ 0, the lowest value of Q n , for given values of S in and S en , is obtained when Note, however, that R Qmin is frequency dependent, and that the value of R D is set by the DUT and cannot be easily modified.
To further simplify the discussion, we will assume a "typical" value of R D = 1 kΩ with a thermal voltage noise of 4kTR D ≈ 16.6 × 10 −18 V 2 /Hz (≈ 4nV / √ Hz) at room temperature.Let us first explore the possibility of using a monolithic low-noise operational amplifier with BJT (Bipolar Junction Transistor) or FET input stage technology.With BJT input stages, lower levels of input voltage noise can be achieved.On the other hand, the current noise obtained when using BJTs is typically six orders of magnitude higher than that obtained using MOSFET input amplifiers.As an example, the voltage noise of the MOSFET input TLC2201 operational amplifier at 1 Hz is 3.6 × 10 −15 V 2 /Hz (60 nV/ √ Hz), with a specified current noise of less than 10 −30 A 2 /Hz (1 fA/ √ Hz).With our assumed reference R D , the contribution of S in to Q n is completely negligible (less than 10 −8 ), but due to the value of S en , Q n is about 225.For the low-noise BJT input OP27, the voltage noise is 100 times lower (36 × 10 −18 V 2 /Hz) than that of the TLC2201, resulting in a contribution to Q n of about 2.25.On the other hand, the effect of the current noise is not negligible as before: at the same frequency (1 Hz), its value is about 22.4 × 10 −24 A 2 /Hz (≈ 5 pA/ √ Hz), and this results in a further contribution of about 1.6 to Q n , for a total value of Q n close to 4. Note, moreover, that the reference value for our chosen resistance is close to the value that minimizes Q n at 1 Hz for the OP27 (R Qmin = 1270 Ω).This means that for impedance values significantly below or above 1 kΩ, Q n becomes significantly greater than 4.
The results discussed above are quite typical and do not change significantly across the range of commercially available monolithic low-noise operational amplifiers.To obtain a value of Q n that is significantly below 1, we must therefore resort to a custom design for the operational amplifier OA1.In particular, we can take advantage of the noise characteristics of discrete component devices (BJTs or JFETs), allowing us to obtain significantly lower levels of equivalent voltage noise.The reduction in voltage noise, however, is accompanied by an increase in current noise.This means that unless we are dealing with very low impedances (well below 100 Ω), there is no advantage in using discrete BJTs as front-end devices.On the other hand, discrete low-noise JFET devices can make it possible to reach sufficiently low equivalent input voltage noise, with the contribution of the current noise to Q n remaining negligible.
On the basis of the above considerations, we resorted to very-low-noise discrete JFETs in order to obtain a super-operational amplifier characterized by equivalent input voltage and current noise levels that are sufficiently low as to allow the design of a TIA capable of enabling the reliable measurement of the low-frequency current noise in devices with equivalent impedances in the range of 1 kΩ or below.
Materials and Methods
The schematic of the proposed ultra-low-noise TIA amplifier is presented in Figure 2. The very-low-noise discrete JFET pair IF3602 is used to provide an ultra-low-noise input stage in front of the low-noise BJT input operational amplifier OPA227.The transistor Q 1 , together with the resistance R SS , behaves as a current source for biasing the JFET pair (J 1 and J 2 ).The collector current I C1 of Q 1 , with V DD = V SS = 12 V, is approximately: where V BEON is the voltage drop between the base and the emitter of Q 1 in the active region (in the range from about 0.6 to 0.7 V).At rest and under ideal conditions, the gate voltages V G1 and V G2 of J 1 and J 2 are at zero potential, so that the two JFETs operate with the same current: I D1 = I D2 = I C1 /2 = 3.5 mA.The voltage drop across the drain resistances R D1 and R D2 is, therefore, 7 V, so that the drain-to-gate voltages V DG1 and V DG2 are maintained at about 5 V, ensuring operation in the active region (the typical pinch-off voltage for the IF3602 is −350 mV).The values of the bias currents for J 1 and J 2 are the result of a compromise between the need for low noise (equivalent input noise decreases when increasing bias) and the need to limit the power dissipated by the active devices in order to limit the convective motion of the air close to the JFET, which can induce large fluctuations at low frequencies.
Table 1.Component list for the circuit in Figure 2. the need to limit the power dissipated by the active devices in order to limit the convective motion of the air close to the JFET, which can induce large fluctuations at low frequencies.
Proposed TIA amplifier.The component types and their values are listed in Table 1.
With the selected bias, the total power dissipated by the IF3602 is 35 mW, which is a small fraction of the maximum allowed power dissipation for the device (300 mW).At the same time, the transimpedance gain gm of each JFET can be expected to be about 70 mA/V [14], and, because RD1 = RD2 = RD= 2 kΩ, the differential voltage gain AVDJ of the JFET differential stage at low frequencies can be estimated to be: The cascade of the discrete JFET input stage with the OPA227 operational amplifier results in a Super Operational Amplifier (SOA in Figure 2), where the nodes G1 and G2 represent the non-inverting and inverting inputs, respectively.From Equation ( 8) and the fact that the DC gain of the OPA227 is about 160 dB, the DC gain of the SOA is above 200 1.
With the selected bias, the total power dissipated by the IF3602 is 35 mW, which is a small fraction of the maximum allowed power dissipation for the device (300 mW).At the same time, the transimpedance gain g m of each JFET can be expected to be about 70 mA/V [14], and, because R D1 = R D2 = R D = 2 kΩ, the differential voltage gain A VDJ of the JFET differential stage at low frequencies can be estimated to be: The cascade of the discrete JFET input stage with the OPA227 operational amplifier results in a Super Operational Amplifier (SOA in Figure 2), where the nodes G 1 and G 2 represent the non-inverting and inverting inputs, respectively.From Equation ( 8) and the fact that the DC gain of the OPA227 is about 160 dB, the DC gain of the SOA is above 200 dB, and therefore the internal compensation of the OPA227 is not sufficient to ensure stability for the entire amplifier.To address this issue, we resort to the compensation network made of R C1 , R C2 , C C1 , and C C2 in Figure 2. The OPA227 introduces a pole at about 3 Hz as part of its frequency response.The compensation network introduces two poles and two zeroes, which reduce the open-loop gain to zero dB before the high-frequency poles of the OPA227 are able to introduce a further phase shift that would cause instability.Because of the 90 • phase shift introduced by the dominant pole of the OPA227, the compensation network is designed in such a way that its phase contribution is less than 45 • at any frequency.This can be achieved if the frequency of the zero is no larger than 10 times that of the pole.With this constraint, the gain amplitude reduction that is obtained with a single zero-pole compensation network is insufficient to reach the desired goal.For this reason, in our design, we introduce two RC networks between the drain of the JFETs.With the values for R C1 C C1 and R C2 C C2 listed in Table 1, the pole and zero frequencies are: With these values, the SPICE simulations show that the open-loop gain reaches 0 dB with a phase margin of about 45 • .This ensures that the shunt-shunt configuration possesses the stability required to produce a transimpedance amplifier for resistive DUTs of any value.
The super operational amplifier is used in a shunt-shunt configuration, as shown in Figure 1, in order to obtain a transimpedance stage with gain A R set by the feedback resistance R R , that is: Assuming a negligible input offset for the SOA, Equation (10) holds for both the DC component and the fluctuations across the DUT.This means that the DC voltage at the output V O1 can be used to estimate the DC current through the DUT.It is for this reason that this voltage is carried by a buffer (OA 3 ) to one of the outputs of the system (V ODC ).
The amplitude of the noise fluctuations at node V O1 is typically too low to be effectively detected using a spectrum analyzer, and therefore, a second stage is used to obtain high voltage gain (O A2 ) after rejecting the DC component using an AC coupling filter (C A2 R A2 ).
Since this amplifier is intended for noise measurements on low-impedance DUTs, typical bias voltages V B across the DUT are often well below 1 V, and this means that the input offset of the SOA must be maintained low both for the voltage across the DUT to essentially coincide with the external bias voltage V B , and for the output V ODC to provide the correct value for the DC current through the DUT.With a discrete JFET input stage for the SOA, the offset can be relevant.This offset is essentially due to the mismatch between the two JFETs in the IF3602 device.Offsets as large as ±50 mV can easily be experienced in the case of the IF3602 [14], and these values are quite relevant for bias voltages on the order of a few hundred mV.Therefore, the schematic diagram in Figure 2 includes a system for adding a DC voltage at the non-inverting input of the SOA, which is obtained by exploiting a trimmer (R T ) together with a voltage divider (R O , R A1 ) and a capacitor C A1 in order to filter out, as much as possible, the thermal noise generated by the offset correction circuit itself.
With the approach shown in Figure 2, equivalent offsets below 5 mV were routinely obtained while testing the realized prototype.
The component list used to build the proposed amplifier is presented in Table 1.
The most important parameter for any amplifier designed for low-frequency noise measurements is the level of background noise, particularly at low frequencies, where the flicker noise of the active devices in the amplifier may mask the flicker noise generated by the DUT.Since we are mainly interested in the performances at low frequencies, and since it can be easily proven that the compensation network does not contribute significantly to the background noise at frequencies up to a few kHz, in discussing the noise performances of the system, we will neglect the presence of R C1 , R C2 , C C1 and C C2 .
The equivalent circuit for noise calculation is shown in Figure 3.The ratio between useful noise (the noise coming from the DUT Z D ) and background noise will be estimated with reference to the input of the last voltage amplifier in the noise measurement chain (v OB , corresponding to the non-inverting input of OA 2 in Figure 2).With the approach shown in Figure 2, equivalent offsets below 5 mV were rou obtained while testing the realized prototype.
The component list used to build the proposed amplifier is presented in Table The most important parameter for any amplifier designed for low-frequency measurements is the level of background noise, particularly at low frequencies, whe flicker noise of the active devices in the amplifier may mask the flicker noise genera the DUT.Since we are mainly interested in the performances at low frequencies, and it can be easily proven that the compensation network does not contribute significa the background noise at frequencies up to a few kHz, in discussing the noise perform of the system, we will neglect the presence of RC1, RC2, CC1 and CC2.
The equivalent circuit for noise calculation is shown in Figure 3.The ratio be useful noise (the noise coming from the DUT ZD) and background noise will be esti with reference to the input of the last voltage amplifier in the noise measurement (vOB, corresponding to the non-inverting input of OA2 in Figure 2).Besides omitting the compensation network, other simplifications are made: BJT-based current source is assumed to be characterized by very high equivalent im ance; (b) the current noise source representing the noise coming from the BJT-based is omitted, since it is a common mode source whose effects are rejected by OA2; equivalent input current noise source at the input of OA3 is omitted, since it is shor the very low impedance at the output of OA2.Because of the high open-loop gain SOA and the fact that we are mainly interested in the noise at low frequencies, w perform noise estimation under the assumption of a virtual short circuit between verting G2(−) and non-inverting G1(+) inputs of the SOA.We will also assume that w working above the cut-off frequency of the AC coupling filter, with the minimu quency of interest being 1 Hz.Note, however, that the capacitors CA1 and CA2 are n placed with short circuits, because, due to the large amount of thermal noise intro by the resistances ROE and RA2, their finite impedance may result in a non-negligibl tribution to the BN of the system even above the cut-in frequency [23].
We will also take advantage of the reasonable assumption that all noise sourc uncorrelated, so that we can estimate the total noise at vOB by adding the noise contri from each single source.
We can start by estimating the contribution SOB_iD to the PSD of the noise at v to the DUT noise source iD with PSD SiD.We have: It can be easily demonstrated that, assuming the PSD SJ1 of ej1 and Sj2 of ej2 to same, and equal to SJ, their total contribution SOB_J is given by: Besides omitting the compensation network, other simplifications are made: (a) the BJT-based current source is assumed to be characterized by very high equivalent impedance; (b) the current noise source representing the noise coming from the BJT-based circuit is omitted, since it is a common mode source whose effects are rejected by O A2 ; (c) the equivalent input current noise source at the input of OA 3 is omitted, since it is shorted by the very low impedance at the output of OA 2 .Because of the high open-loop gain of the SOA and the fact that we are mainly interested in the noise at low frequencies, we can perform noise estimation under the assumption of a virtual short circuit between the inverting G 2 (−) and non-inverting G 1 (+) inputs of the SOA.We will also assume that we are working above the cut-off frequency of the AC coupling filter, with the minimum frequency of interest being 1 Hz.Note, however, that the capacitors C A1 and C A2 are not replaced with short circuits, because, due to the large amount of thermal noise introduced by the resistances R OE and R A2 , their finite impedance may result in a non-negligible contribution to the BN of the system even above the cut-in frequency [23].
We will also take advantage of the reasonable assumption that all noise sources are uncorrelated, so that we can estimate the total noise at v OB by adding the noise contribution from each single source.
We can start by estimating the contribution S OB_iD to the PSD of the noise at v OB due to the DUT noise source i D with PSD S iD .We have: It can be easily demonstrated that, assuming the PSD S J1 of e j1 and S j2 of e j2 to be the same, and equal to S J , their total contribution S OB_J is given by: As far as the contribution of the offset compensation network is concerned, with the component values in Table 1, the equivalent resistance R OE is essentially reduced to R O R A1 , which, at the minimum frequency of interest (1 Hz), is much greater than the impedance X CA1 of the capacitor C A1 .Therefore, the contribution S OB_OE due to the thermal noise i OE of the resistor R OE is reduced to: where f is the frequency, k is the Boltzmann constant, and T is the absolute temperature.
With respect to the contribution from the thermal noise of the resistances R D1 and R D2 and the equivalent input voltage (e O1 ) and current noise sources (i 1A and i 1B ), assuming a virtual short circuit at the inputs of the SOA, there is no contribution to the output noise.This result, however, is only an approximation, and depends on the magnitude of the loop gain and the magnitude of the PSD associated with the noise sources.Proper calculations show that these contributions can be neglected at low frequencies as long as the loop gain of the SOA in the shunt-shunt configuration in Figure 2 is high and g m R D >> 1.
In terms of the contribution S OB_RR of the thermal noise generated by the feedback resistance R R , it is given by: Since we are assuming that we are operating well above the cut-in frequency of the AC filter R A2 C A2 , the reactance of the capacitor C A2 is much lower than the resistance R A2 , and therefore, for the contributions S OB_A2 of the resistance R A2 and S OB_OI2 of the noise source i I2 , we have: where S II2 is the PSD of the current noise source i I2 .Finally, it is necessary to add the contribution S OB_EI2 due to the equivalent noise source e I2 , that is: where S eI2 is the PSD of the current noise source e I2 .
In order to more easily understand the relative weight of the different noise contributions, we can proceed with the same simplifying assumption used in the previous section to obtain the equivalent input background noise, that is, the DUT impedance is assumed to be essentially a resistance with a value R D , and the condition R D /R R << 1 is satisfied.
Starting from the PSD S OB of the overall noise at the output v OB in the form: we can calculate the parameter Q n as before.We have: In Equation ( 18), Q n is expressed as the sum of two contributions to stress the fact that the contribution Q n2 can be made as small as desired by increasing the values of the coupling capacitors C A1 and C A2 , at the cost, however, of increasing the time constants τ A1 and τ A2 ; the fact that these time constants increase means that the settling time of the circuit increases as well, and this, besides resulting in a waste of time when connecting a new DUT or setting a different bias voltage, can make offset correction a much more challenging task.When looking at the relative weight of the terms contributing to Q n2 , we can start by evaluating the term that contains the PSD S II2 .The ADA4625 is a JFET input operational amplifier.The PSD for the equivalent input current noise source reported in the datasheet is 4.5 fA/ √ Hz, and this means that the contribution of the fraction containing S II2 in Equation ( 18) is much lower than 1 for values of R A2 on the order of a MΩ or more (R A2 = 1 MΩ in our prototype).This, together with the fact that the two time constants τ A1 and τ A2 have similar values, and the fact that: clearly indicates that Q n2 is essentially set by the first term in Equation (18), that is, by the thermal noise generated by the equivalent resistance R OE that is not completely filtered out by the capacitance C A1 .At the minimum frequency of interest (1 Hz), assuming a typical value for R D of 1 kΩ, we have Q n2 ≈ 0.06.Note that, because of the proportionality to the inverse of the frequency squared, Q n2 rapidly decreases with increasing frequency.
In the case of Q n1 , too, the highest value is obtained at the lowest frequency of interest because of the flicker noise component introduced by the JFETs and by the operational amplifier OA 2 .At the minimum frequency of interest (1 Hz), assuming a typical value of R D of 1 kΩ and a worst-case scenario in which R R is limited to 10 kΩ (R R /R D = 10), with S J = 1 × 10 −18 V 2 /Hz [14] and S eI2 = 1 × 10 −16 V 2 /Hz, we obtain Q n2 ≈ 0.28.If the bias conditions are such that a feedback resistance of R R = 100 kΩ can be used, Q n2 is reduced to about 0.12, a value essentially set by the noise contribution coming from the JFETs.Overall, therefore, under the same conditions explored in the introduction (DUT resistance on the order of 1 kΩ), a value of Q n is obtained that is a small fraction of 1 (from 0.18 to 0.24, depending on the value of the feedback resistance).This is to be regarded as an excellent result in consideration of the fact that, when investigating the flicker noise, the thermal noise generated by the DUT can be regarded as part of the background noise of the system (i.e., the flicker noise must be much larger than the thermal noise for a reliable characterization).Therefore, obtaining a value of Q n that is a small fraction of 1 means that we are operating quite close to the ideal conditions under which no excess noise will be introduced by the amplifier.
The value of Q n obtained above is relative to the minimum frequency of interest (1 Hz).As the frequency increases, the background noise decreases because of the reduction of both the impedance of the coupling capacitances and the flicker noise contribution introduced by the active devices.
As can be deduced from Equation (18), if R D decreases significantly, Q n increases and the background noise of the system becomes relevant with respect to the thermal noise introduced by the DUT.
Results
We built the amplifier presented in Figure 1 to experimentally verify that sensible direct low-frequency current noise measurements on low-impedance devices are indeed possible.We tested the amplifier with two different values for the feedback resistance R R , i.e., 10 kΩ and 100 kΩ, as shown in Table 1.The test measurements were initially performed using known resistances as DUTs.In particular, we tested the system with both a 100 Ω resistor and a 1 kΩ resistor as the DUT.The 1 kΩ resistor was taken as representative of the typical impedance we expect with actual devices in noise measurements.The test using a 100 Ω resistor as a DUT was performed in order to more clearly evidence the noise contribution (background noise) introduced by the amplifier, since, as was shown in the previous section, its relative weight increases with decreasing DUT impedance.
As can be observed in Figure 4, when testing a 100 Ω resistance as a DUT, with R R = 10 kΩ, the background noise due to the amplifier had a noticeable effect on the measurement results, although it can be observed that even in this configuration, it should be possible to perform sensible flicker noise measurements within the frequency range in which the flicker noise is much greater than the thermal noise of the device.It can also be noticed that, as can be deduced from Equation (18), the background noise in this case is mostly due to the first term in Q n1 , since R D /R R = 10 −2 .In other words, the background noise is set by the input JFETs, and increasing the value of the feedback resistance has a negligible effect on the background noise.
As can be observed in Figure 4, when testing a 100 Ω resistance as a DUT, with RR = 10 kΩ, the background noise due to the amplifier had a noticeable effect on the measurement results, although it can be observed that even in this configuration, it should be possible to perform sensible flicker noise measurements within the frequency range in which the flicker noise is much greater than the thermal noise of the device.It can also be noticed that, as can be deduced from Equation (18), the background noise in this case is mostly due to the first term in Qn1, since RD/RR = 10 −2 .In other words, the background noise is set by the input JFETs, and increasing the value of the feedback resistance has a negligible effect on the background noise.
Test results when using resistances at room temperature as DUTs.Tests were performed on 100 Ω and 1 kΩ resistances using two different values for the feedback resistance RR.The continuous black lines represent the expected noise, that is, the thermal current noise generated when using the resistances as DUTs.
When testing a 1 kΩ resistance as the DUT, from Figure 4 it can be observed that the measured value of the equivalent input current noise was very close to the noise generated by the DUT, except for at very low frequencies, where the deviation is, however, very small.A careful examination of the two curves relative to RR = 10 kΩ (blue curve) and RR = 100 kΩ (red curve) indicates, as should be expected, that employing a higher feedback resistance is beneficial as far as the BN is concerned.On the other hand, no significant difference can be expected when even a moderate level of flicker noise is present, and therefore, in general terms, and unless the DUT impedance is considerably below 1 kΩ, there is no significant advantage to employing feedback resistances above 10 kΩ, as this will result in a limitation of the bias level that can be applied to the DUT.It can be observed from Figure 4 that the measured noise increases above 10 kHz.This can be explained by the fact that, because of the compensation network, the gain in the first stage decreases as the frequency increases, and the equivalent input noise increases because the relative weights of the noise introduced by RD1, RD2, and OA1 increase [14].
Following the preliminary tests on resistances, we performed noise measurements on advanced photodetectors to demonstrate the ability of the new amplifier to directly perform current noise measurements on low-impedance DUTs.
The photodetector used in this investigation was an InAsSb (p + Bpun + )-based barrier backside illuminated device.It was grown on a GaAs substrate with GaAs and InAs Si-Figure 4. Test results when using resistances at room temperature as DUTs.Tests were performed on 100 Ω and 1 kΩ resistances using two different values for the feedback resistance R R .The continuous black lines represent the expected noise, that is, the thermal current noise generated when using the resistances as DUTs.
When testing a 1 kΩ resistance as the DUT, from Figure 4 it can be observed that the measured value of the equivalent input current noise was very close to the noise generated by the DUT, except for at very low frequencies, where the deviation is, however, very small.A careful examination of the two curves relative to R R = 10 kΩ (blue curve) and R R = 100 kΩ (red curve) indicates, as should be expected, that employing a higher feedback resistance is beneficial as far as the BN is concerned.On the other hand, no significant difference can be expected when even a moderate level of flicker noise is present, and therefore, in general terms, and unless the DUT impedance is considerably below 1 kΩ, there is no significant advantage to employing feedback resistances above 10 kΩ, as this will result in a limitation of the bias level that can be applied to the DUT.It can be observed from Figure 4 that the measured noise increases above 10 kHz.This can be explained by the fact that, because of the compensation network, the gain in the first stage decreases as the frequency increases, and the equivalent input noise increases because the relative weights of the noise introduced by R D1 , R D2, and OA 1 increase [14].
Following the preliminary tests on resistances, we performed noise measurements on advanced photodetectors to demonstrate the ability of the new amplifier to directly perform current noise measurements on low-impedance DUTs.
The photodetector used in this investigation was an InAsSb (p + B p un + )-based barrier backside illuminated device.It was grown on a GaAs substrate with GaAs and InAs Si-doped layers using molecular beam epitaxy.The architecture details of the investigated structure are shown in Figure 5.It consisted of four main layers, which were additionally supplemented by gradient layers.The absorber, which is the main layer on which the radiation is absorbed, was non-intentionally doped (n.i.d.) with n-type conductivity.The bandgap barrier for electrons was made using AlAsSb.The n + contact placed at the bottom was made of a highly Si-doped InAs 1-x Sb x layer.To reduce tunneling currents and decrease the maximum electric field occurring on the junction the additional graded Si-doped InAs 1−x Sb x layer was sandwiched between the n+ contact and the absorber.
A Be-doped (p-type) AlSb barrier was used to cap the absorber layer.The Be-doped (8 × 10 18 cm −3 InAs 1-x Sb x contact layer was applied to the top of the structure.Thanks to this construction both dark current and noise were reduced.The ohmic contact to the structures was performed by etching followed by Au/Ti metallization.The overall structure with the contacts was closed inside a metal TO-8 package. Typically, the resistance area product (R 0 A) of commercially available InAs x Sb 1−x diodes varies from about 4 to 60 Ω cm 2 (with 0 ≤ x ≤ 0.36, T = 300 K) [24].The tested detector was optimized for the 5 µm wavelength and mounted on a thermoelectric cooler to make it possible to improve its overall performance by operating at lower temperatures.
In our experiments, we used a dedicated PID thermoelectric cooler controller to set up and precisely stabilize the temperature during measurements.Moreover, to dissipate heat from the "hot" side of the TEC, it was placed on a large-area aluminum radiator using thermoconductive paste.At zero bias voltage and A = 0.01 mm 2 , the R 0 A product of our sample at 300 K was about 7 mΩ•cm 2 .Further electro-optical details about the detector can be found in [25].
structure are shown in Figure 5.It consisted of four main layers, which were additionally supplemented by gradient layers.The absorber, which is the main layer on which the radiation is absorbed, was non-intentionally doped (n.i.d.) with n-type conductivity.The bandgap barrier for electrons was made using AlAsSb.The n + contact placed at the bottom was made of a highly Si-doped InAs1-xSbx layer.To reduce tunneling currents and decrease the maximum electric field occurring on the junction the additional graded Si-doped InAs1−xSbx layer was sandwiched between the n+ contact and the absorber.A Be-doped (ptype) AlSb barrier was used to cap the absorber layer.The Be-doped (8 × 10 18 cm −3 InAs1-xSbx contact layer was applied to the top of the structure.Thanks to this construction both dark current and noise were reduced.The ohmic contact to the structures was performed by etching followed by Au/Ti metallization.The overall structure with the contacts was closed inside a metal TO-8 package. Typically, the resistance area product (R0A) of commercially available InAsxSb1−x diodes varies from about 4 to 60 Ω cm 2 (with 0 ≤ x ≤ 0.36, T = 300 K) [24].The tested detector was optimized for the 5 µm wavelength and mounted on a thermoelectric cooler to make it possible to improve its overall performance by operating at lower temperatures.In our experiments, we used a dedicated PID thermoelectric cooler controller to set up and precisely stabilize the temperature during measurements.Moreover, to dissipate heat from the "hot" side of the TEC, it was placed on a large-area aluminum radiator using thermoconductive paste.At zero bias voltage and A = 0.01 mm 2 , the R0A product of our sample at 300 K was about 7 mΩ•cm 2 .Further electro-optical details about the detector can be found in [25].Figure 6 shows typical current-voltage characteristics of the investigated devices at different temperatures.These were measured using a precision SMU Keithley 236 using forcing voltage.While performing these measurements, the detector was covered to avoid any influence from background radiation coming from the environment.It was shown that for lower temperatures, the dependence of the current on reverse bias was much more visible than at higher ones.This confirms that the dynamic resistance of the investigated IR detector is a function of both the temperature and the bias.Figure 6 shows typical current-voltage characteristics of the investigated devices at different temperatures.These were measured using a precision SMU Keithley 236 using forcing voltage.While performing these measurements, the detector was covered to avoid any influence from background radiation coming from the environment.It was shown that for lower temperatures, the dependence of the current on reverse bias was much more visible than at higher ones.This confirms that the dynamic resistance of the investigated IR detector is a function of both the temperature and the bias.
For this reason, all tests, including noise measurements, were performed while actively keeping the devices at a given temperature.To ensure proper temperature stabilization using TEC, the noise measurements reported in this paper were performed at 280 K, which is a temperature below, but not too far away from, ambient temperature.At this temperature, despite the cooling (which usually increases resistance), the detector is still characterized by relatively low resistance.
The impedance of the device vs. frequency was measured at different bias voltages and different operating temperatures, as well.In all cases, the impedance of the device is essentially coincident with the differential resistance at DC up to frequencies as high as a few tens of kHz.The differential resistance vs. bias for the tested photodiode at an operating temperature of 280 K is reported in Figure 7.For this reason, all tests, including noise measurements, were performed while actively keeping the devices at a given temperature.To ensure proper temperature stabilization using TEC, the noise measurements reported in this paper were performed at 280 K, which is a temperature below, but not too far away from, ambient temperature.At this temperature, despite the cooling (which usually increases resistance), the detector is still characterized by relatively low resistance.
The impedance of the device vs. frequency was measured at different bias voltages and different operating temperatures, as well.In all cases, the impedance of the device is essentially coincident with the differential resistance at DC up to frequencies as high as a few tens of kHz.The differential resistance vs. bias for the tested photodiode at an operating temperature of 280 K is reported in Figure 7.For this reason, all tests, including noise measurements, were performed while actively keeping the devices at a given temperature.To ensure proper temperature stabilization using TEC, the noise measurements reported in this paper were performed at 280 K, which is a temperature below, but not too far away from, ambient temperature.At this temperature, despite the cooling (which usually increases resistance), the detector is still characterized by relatively low resistance.
The impedance of the device vs. frequency was measured at different bias voltages and different operating temperatures, as well.In all cases, the impedance of the device is essentially coincident with the differential resistance at DC up to frequencies as high as a few tens of kHz.The differential resistance vs. bias for the tested photodiode at an operating temperature of 280 K is reported in Figure 7.The noise characterization of the photodetector is performed in reverse bias.As can be noticed from Figure 7, in the case of the device to be measured, its differential resistance ranges from a few hundred ohms (for a reverse bias of a few tens of mV) to a maximum of about 20 kΩ, at about −250 mV.It can also be noticed that, due to the peculiar shape of the I-V curve, the device resistance decreases at biases below −250 mV.
The results of the noise measurements performed on the photodiode at 280 K and different bias levels are reported in Figure 8.The thermal noise corresponding to the resistance of the unbiased photodiode is also shown.
From Figure 8, it is apparent that, except for the lowest biases at higher frequencies, the situation is one in which the flicker noise generated by the photodiode can be detected and estimated.Increasing the voltage causes a significant increase in flicker noise S(f ) = 1/f α in situations in which it is dominant in the overall examined frequency range, where the α parameter has a value close to 1.The detailed behavior of the spectral noise in this type of photodetector has already been described in the literature [26], and it is outside the scope of this work.from a few hundred ohms (for a reverse bias of a few tens of mV) to a maximum of about 20 kΩ, at about −250 mV.It can also be noticed that, due to the peculiar shape of the I-V curve, the device resistance decreases at biases below −250 mV.
The results of the noise measurements performed on the photodiode at 280 K and different bias levels are reported in Figure 8.The thermal noise corresponding to the resistance of the unbiased photodiode is also shown.From Figure 8, it is apparent that, except for the lowest biases at higher frequencies, the situation is one in which the flicker noise generated by the photodiode can be detected and estimated.Increasing the voltage causes a significant increase in flicker noise S(f) = 1/f α in situations in which it is dominant in the overall examined frequency range, where the α parameter has a value close to 1.The detailed behavior of the spectral noise in this type of photodetector has already been described in the literature [26], and it is outside the scope of this work.
It is, however, important to remark that the availability of effective instrumentation, such as that discussed in this paper, can be extremely useful.To show what can be obtained if effective and easy-to-use instrumentation is available, we can, for instance, discuss what can be inferred from the dependence of the current noise density at f = 10 Hz on the bias current, as reported in Figure 9.It is, however, important to remark that the availability of effective instrumentation, such as that discussed in this paper, can be extremely useful.To show what can be obtained if effective and easy-to-use instrumentation is available, we can, for instance, discuss what can be inferred from the dependence of the current noise density at f = 10 Hz on the bias current, as reported in Figure 9.The line in the plot represents the trend for the proportionality of the flicker noise to the bias current squared.From the results reported in Figure 9, it can be observed that the proportionality of the flicker noise vs. the bias current squared can be assumed for low bias voltages (region A in the I-V characteristics reported in the inset); with increasing voltage (in reverse bias), region B in the I-V characteristics is explored, and, when region C is reached, the dependence of the noise on the bias current is differs significantly from the behavior at low voltages/currents.The shape of this noise characteristic results from the properties of the tested photodiode.The results of studies described in the literature show that the total 1/f noise power spectral density is a complex function of a few noise currents.In [27], this function is presented by the model formula: The line in the plot represents the trend for the proportionality of the flicker noise to the bias current squared.From the results reported in Figure 9, it can be observed that the proportionality of the flicker noise vs. the bias current squared can be assumed for low bias voltages (region A in the I-V characteristics reported in the inset); with increasing voltage (in reverse bias), region B in the I-V characteristics is explored, and, when region C is reached, the dependence of the noise on the bias current is differs significantly from the behavior at low voltages/currents.The shape of this noise characteristic results from the properties of the tested photodiode.The results of studies described in the literature show that the total 1/f noise power spectral density is a complex function of a few noise currents.In [27], this function is presented by the model formula: S ieq = α sh I 2 sh + α g−r I 2 g−r + α di f f I 2 di f f + α tun I tun (20) where I sh, I g−r , I diff and I tun are the shunt, generation-recombination, diffusion and tunneling dark current components, respectively, with the corresponding noise coefficients α sh , α g−r, α diff and α tun .Depending on the photodiode construction and its operating point (temperature and bias voltage), each of the above noise sources will affect the total noise differently (i.e., the noise coefficients will take different values).Based on the results described in [26][27][28], the inset region in Figure 9 shows that the diffusion and g − r current components predominate in the detector's dark current in the low-and mid-voltage ranges (A,B).Meanwhile, at high voltage bias (C), the tunneling current components predominates.However, there is no correlation between the total dark current and the measured 1/f noise PSD.This suggests that there are some current components other than diffusion that could have a higher influence on 1/f noise at different levels of bias voltage.
In the low-voltage range (A), the quadrature dependence is caused by the ohmic-like behavior of the tested device, and measured noise can be regarded as resistance fluctuation noise from detector shunt resistance.
Along with increasing the voltage (B), the 1/f noise components come from shunt, g − r, and tunneling currents.At the highest voltages (C), the overall noise is dominated by tunneling current.
Conclusions
This paper presented the analysis, construction, and verification of an ultra-lownoise transimpedance amplifier.It was dedicated to investigating low-frequency noise in photodetectors, which are characterized by relatively low resistance, on the order of or below 1 kΩ.As discussed in this paper, a low-impedance device connected to the input of a transimpedance amplifier results in a large amount of background noise due to the equivalent input voltage noise source represented by the operational amplifier used in typical shunt-shunt feedback configurations.This problem was addressed and solved by designing a transimpedance amplifier based on a discrete device input stagebased operational amplifier, characterized by very low input voltage noise, on the order of 4 nV/ √ Hz, 2 nV/ √ Hz, and 0.9 nV/ √ Hz at 1 Hz, 10 Hz, and 1 kHz, respectively.The results were obtained by resorting to a low-noise IF3602 differential pair n-channel JFET transistor as a discrete input stage for the operational amplifier, which was used in a shunt-shunt configuration, in order to obtain a transresistance stage.The resulting background noise was investigated and verified using known-value resistors as DUTs.After preliminary characterization, the amplifier was used to study the current noise in a low-resistance InAsSb barrier detector, demonstrating that direct and reliable measurement of the current noise in low-impedance devices is indeed possible with the instrumentation designed and tested here.
Figure 2 .
Figure 2. Proposed TIA amplifier.The component types and their values are listed in Table1.
Figure 3 .
Figure 3. Simplified equivalent circuit for noise calculation.
Figure 3 .
Figure 3. Simplified equivalent circuit for noise calculation.
Figure 6 .
Figure 6.Current-voltage (I-V) characteristics of the tested photodiode at different temperatures.
Figure 7 .
Figure 7.The determined differential resistance of the photodiode at a temperature of 280 K.
Figure 6 .Figure 6 .
Figure 6.Current-voltage (I-V) characteristics of the tested photodiode at different temperatures.
Figure 7 .
Figure 7.The determined differential resistance of the photodiode at a temperature of 280 K.Figure 7. The determined differential resistance of the photodiode at a temperature of 280 K.
Figure 7 .
Figure 7.The determined differential resistance of the photodiode at a temperature of 280 K.Figure 7. The determined differential resistance of the photodiode at a temperature of 280 K.
Figure 8 .
Figure 8.Current noise measurement results at different bias levels.
Figure 8 .
Figure 8.Current noise measurement results at different bias levels.
Figure 9 .
Figure 9. Current noise at 10 Hz vs. bias current.The I-V characteristics of the device at 280 K from Figure 5 are reported in the inset.
Figure 9 .
Figure 9. Current noise at 10 Hz vs. bias current.The I-V characteristics of the device at 280 K from Figure 5 are reported in the inset.
is considered. | 13,328 | 2023-09-04T00:00:00.000 | [
"Physics",
"Engineering"
] |
Presentation of Design Equations for Array of Circumferential Slot on Cylindrical Waveguide
In this paper the design equations for an array of circumferential slots on a cylindrical waveguide are obtained, following the procedure introduced by Elliott for slots on rectangular waveguides. The minimization of the error function is achieved for optimization of slot array parameters. The optimization of slot parameters is not goal of this paper but a numerical example are presented as illustrations of the proposed synthesis method. The results of array designs by the method of the least squares are verified by two computer simulation softwares, namely CST and HFSS.
Introduction
Train communication systems have received a considerable interest over the years, with tunnel connectivity providing an ongoing challenge due to the hostile environmental characteristics.In recent times, security aspects have come to the forefront with high-definition closed-circuit television monitoring being considered, together with possible remote train control and passenger emergency assistance networks [1] [2] [3] [4] [5].The antenna applied for the broadcasting station of the ultra high frequency television (UHF TV) requires either unidirectional or omnidirectional beam with sufficient gain and high power handling [6].
The first study of radiation from an aperture on an infinite metallic plane was reported by Silver and Saunders in 1950, who derived a formula for the generated external field [7].Bailin derived formulas for the radiation from axial and Open Journal of Antennas and Propagation circumferential rectangular slots on a conducting circular cylinder in 1955 [8] and compared his results with measurement data.Golden, et al. investigated some approximate techniques for the determination of mutual couplings among slots on cylindrical surfaces in 1974 [9].We follow the general method introduced by Elliott [10] for the evaluation of scattering from an aperture on the surface of a cylindrical waveguide, which is believed to be unprecedented for the circumferential slots in a circular cylindrical surface.Consequently, our main task is to derive two design equations which is done by assuming that the radius of the cylindrical surface is large, providing the possibility of assuming the slots to be located on a flat ground plane.This assumption may lead to some design approximations, which may then be rectified by a full-wave simulation by available computer softwares [11]- [14].
The TM 01 mode is assumed in the cylindrical waveguide, where the electric field is radial in its cross-section.Consequently, the radiation from the circumferential slots on its surface is omni-directional and independent of the azimuthal angle, which is desired for cylindrical slot arrays.
First Design Equation
The configuration of circumferential rectangular slots on a cylindrical waveguide is shown in Figure 1, together with the related dimensions which the thickness parameter is the thickness of wall of waveguide, R is radius of waveguide, Rα is length of slot, d is spacing between slots, W is width of slot and 0 ϕ is the an- gle offset of slot on circumference of cylindrical waveguide.
The backward (B) and forward (C) scattered field amplitudes are given by [10]: where subscripts B and C represent the amplitudes of backward and forward waves, t indicates the tangential field in the cross-section, S indicates the crosssection of uniform waveguide, and "slot" shows the slot surface area.
The field components of TM 01 mode in the cylindrical waveguide are: where 01 β is the phase constant and is cut-off number with a being the radius of waveguide.
The tangential electric field on the n'th aperture is: , 0 where n α is the angle of n'th slot.The other field component are: These field components are substituted in Equations ( 1) and ( 2) to obtain: Observe that the forward and backward traveling wave amplitudes are equal.Therefore the transmission line equivalent circuit consists of a parallel admittance.
The first design equation is then derived.The reflected power from the aperture is: where Open Journal of Antennas and Propagation The equality of reflected powers due to the scattered fields and the transmission line leads to the following relation: However, the scattered voltage amplitudes are [7]: The amplitudes B and B 01 from Equations ( 19) and ( 13) are then substituted in (18) to obtain the first design equation: where the Bessel functions 0 J , 1 J and 2 J are calculated for ha = 2.405 for TM 01 mode.
Second Design Equation
For the derivation of the second design equation, the procedure described by Elliott ( [10], pp: 402-407]) is followed, which for the circumferential slots on cylindrical waveguides gives: where ( ) ( ) ( ) ( ) is the intrinsic Open Journal of Antennas and Propagation impedance of medium and 0 G is characteristics admittance of cylindrical waveguide.Equation ( 21) can be written as is the active admittance of equivalent dipole that is assumed in the derivation of second design equation.we have where: nn Z : Self impedance of circumferential slot, which is equal to Z : Mutual impedance between circumferential slots on the cylindrical waveguide, which is equal to Z is mutual impedance between two dipole which may be obtained from the mutual admittance between two slots s nm Y by the Booker's relation.
The second design equation is then determined by these relations.
Design of a Linear Traveling Wave Slot Array
Consider the equivalent circuit of the linear traveling wave slot array as shown in Figure 2. The normalized admittance at the n'th slot looking towards the match port is [10]: where the second design Equation ( 21) is used.
The mode voltages at successive junctions are then related by: which may be written for . This ratio may also be obtained by Equation ( ) ( ) This expression is appropriate for the construction of an error function.
Construction of Error Function
The error function consists of three terms: Error Function Matching DesignEqs. where The error function depends on the slot spacings and angular dimensions and will be used for optimizing slot parameters by minimizer algorithms such as gradient conjugate or genetic algorithm.
Modified Taylor Pattern at 5.35 GHz
The cylindrical slot array is designed for 13 slots at the frequency 5.35 GHz.The design parameters of the array are given in Table 1.The pattern of slot array as obtained by the MLS and computer simulations by CST and HFSS are in Figure 3, for comparison.The VSWRs of array at the input port of cylindrical waveguide are drawn in Figure 4.
Conclusion
In this paper the design equations are developed for the traveling wave mode by
Figure 1 .
Figure 1.An array of circumferential slots on a circular cylindrical surface.
which give the reflected power:
Figure 2 .
Figure 2. Equivalent circuit of the linear traveling wave slot array.
Figure 3 .
Figure 3.Comparison of patterns by MLS, HFSS and CST at 5.35 GHz.
Figure 4 .
Figure 4. Diagram of VSWR Simulated by HFSS and CST softwares.
with SLL = −13 dB Characteristics of desired pattern employing the equivalent circuits according to the Elliott's method.The geometrical dimensions of the slot array on the cylindrical surface are determined by the minimization of the appropriate error functions.The proposed synthesis method of cylindrical slot array is demonstrated by one design example at 5.35 GHz frequency and is verified by simulation softwares of CST and HFSS.Such arrays are appropriate for various platforms of cylindrical shape, such broadcasting transmitter antennas (TV station).
Table 1 .
The parameters and specifications of cylindrical slot array. | 1,712.2 | 2017-12-08T00:00:00.000 | [
"Engineering",
"Physics"
] |
Beneficiation of Low-grade Phosphate Deposits by a Combination of Calcination and Shaking Tables: Southwest Iran
Three quarters of the world's phosphate deposits are of sedimentary origin and 75%–80% of those include carbonate gangue. In this study, carbonate sedimentary phosphate deposits of the Lar Mountains of southwest Iran are studied. These deposits consist mainly of calcite, fluorapatite, quartz, kaolinite and illite, with an average P2O5 grade of 9%–10% (low-grade). Various pre-processing and processing methods have been developed for concentrating low-grade phosphate up to marketable grade and this study aims to select the optimal method to produce an economically viable grade of phosphate concentrate from low-grade ore. Different concentration methods, including calcination and gravity separation, were applied on samples at both a laboratory and semi-industrial scale (pilot scale). Using an integrated method of calcination (performed in a rotary kiln) and shaking table for concentrating the low-grade phosphate ore, the results show promise at producing grades of 30.77% P2O5 with 60.7%–63.2% recovery.
Phosphate Production
Phosphate plays a significant economic role in developing countries because of the increasing demand on phosphate rock for fertilizer production and its importance in animal feed stocks, as well as food-grade phosphates and other industrial uses.The high demand for phosphate is typically fulfilled through mining and processing [1], an industry which, globally, produced 224 million tons in 2013 and is expected to reach 260 million tons in 2017 [2].About 75% of the world's phosphate rocks are of sedimentary origin and 75%-80% of those contain carbonate gangue [3].To meet the needs of the agricultural sector to produce phosphate and chemical fertilizers, several methods have been proposed for mining based on the characteristics and depth of phosphate ore [4].Similarly, to concentrate low-grade phosphate ore to a marketable grade (~30% P2O5) [5] several pre-processing and processing methods are defined [1,6,7].These are based on the ore type, associated gangue minerals and the amount of impurities, as well as factors such as the degree of liberation of apatite minerals, the cost of the beneficiation method [1,6].The methods employed include gravity separation [6], magnetic separation [8][9][10], electrostatic separation [11][12][13][14], size reduction, screening [15], attrition, scrubbing, classification [7], heavy media separation [16], calcination [17], acidic leaching [18][19][20], direct flotation [21][22][23][24], reverse flotation [25,26] or the use of multiple methods.Sedimentary rocks have various chemical and mineralogical compositions in the gangue phase [16] and, therefore, based on the major associated minerals, the Lar phosphate deposit of southwest Iran is classified as a calcareous ore of sedimentary origin.In terms of processing, conventional techniques such as flotation and physical separation are difficult to remove the carbonate minerals from such ores (because of the similarity in physical properties of carbonates and phosphates) [6] and Calcination is another solution for upgrading these difficult-to-treat types of ores.Beneficiation by calcination is one of the better known processes which has been proposed in the past to treat carbonate-bearing sedimentary phosphate ores [27].Calcination can be an effective mineral concentration method, and is used in the processing and production of more than 10% of global phosphate sales [28].Calcination involves the thermal decomposition of carbonates and burning intended organic materials but has several disadvantages [11,29], such as high energy costs [5], low reactivity of final products and the high initial capital cost of calcination plants [6].
The thermal dissociation of carbonates is an endothermic reaction (Equation 1) with a significant energy requirement.
Furthermore, calcination produces low reactivity of the resulting phosphate [4] as well as a lower ratio of CaO/P2O5 compared with raw francolite [30].It should be noted that despite the disadvantages, the calcined phosphate product is used as raw material in the production of phosphoric acid and chemical fertilizers [31].
Using gravity methods after calcination is an effective process to increase the concentrate grade and leads to a more economical design.There are numerous advantages to gravity separation: low startup cost, low energy consumption during the crushing process, high efficiency, lack of environmental impacts and high selectivity compared with other methods, such as flotation [32].The gravity separation method is not applicable to all mineral compounds and requires the determination of a concentration criterion (CC) based on the relationship between the specific gravity (SG) of heavy and light minerals and the fluid (water or air) in which they are found [33], as follows: Concentration Criterion = Mineralogical studies have shown that calcite, quartz, apatite (collophane) and glauconite with specific gravities of 2.7, 2.65, 3.2 and 2.4 g/cm 3 , respectively, are the primary minerals of the phosphate ore.Consequently, the CC, based on the equation of Taggart (1951) [33] is 1.29, a value which indicates that gravity separation will not be commercially viable and cannot be used as the only pre-concentration method.Because carbonate is very finely dispersed in phosphate, gravity separation is also ineffective [16] and the similarity in specific gravities of dolomite, calcite and apatite also render the technique ineffective as the only mineral concentration process.Dependence of the foregoing phenomena it can be stated that this method should be applied together with calcination as an effective process to concentrate the low-grade phosphate ore.In this study the degree of liberation was considered the fundamental selection criterion for screening using the shaking table and, hence, parameters such as deck slope, feed water flow rate and dressing water flow rate were optimized to increase the efficiency of process.
Of the gravitational methods, the use of shaking tables (such as the Wilfley table) is considered an efficient method for ore concentration.In this method, a solid-liquid separation process is based on cross flow of light and heavy particles on an inclined, riffled table as the particles simultaneously spread out.As the table shakes the differential motion, riffled deck and cross flowing water causes particle separation (Figure 1), with the riffles helping to transmit the shaking motion to the particles as well as preventing direct washing of particles off the table.The vibration is asymmetrical, being slow in the forward direction and quick in the reverse direction.The particle feed enters at the corner of the table at a concentration of 25% solids, along with dressing water introduced from the upper edge to aid separation and displacement across the table.Finally, the particles move diagonally across the deck in accordance with their specific gravity where they can be variously collected.
Background of the Study
As 75% of the world's phosphate carbonate reserves are of sedimentary origin [3], much research has been done into the use of calcination to reduce carbonate gangue.In each of the cases described in this section, new methods involving calcination were used to concentrate phosphate carbonate ore.Kaljuvee et al. (1995) [34] examined phosphate samples from Kazakhstan with an initial P2O5 grade of 21%-23% and CO2 content of 8%-10% CO2.This is a carbonate ore containing primarily fluorapatite, quartz, dolomite and minor amounts of pyrite and calcite.The similarity in mineralogical composition in the region meant that common concentration methods (flotation, wet separation) were not effective.An integrated method of fluidized-bed calcination at a temperature of 900 °C and air separation for the concentration of the intended sample was used.The final P2O5 grade of the concentrate reached 28% with recovery at more than 85%.
Sinirkaya et al. (2011) [35] examined the changes of P2O5 in samples of phosphate rock from Turkey during simultaneous sulfidation-calcination.The ore had an initial P2O5 grade of 23%-27% and consisted primarily of calcite, fluorapatite and quartz.In this case, a fluidized-bed reactor was used for calcination, and the results showed a temperature decrease during calcination, a reduction in particle size as well as rapid quenching of sulfated samples, which resulted in an increase of P2O5 to more than 36%.
Khoshjavan and Rezai (2011) [28] examined phosphate rock samples of northern Iran that have an average P2O5 grade of 11.9% and consist of apatite (collophane), quartz, calcite and dolomite.Using calcination and subsequent two-stage flotation and desliming, a concentrate with a P2O5 grade of more than 31% P2O5 and recovery of more than 62% was obtained.However, in contrast with flotation methods, gravity separation has several advantages such as [33]: (a) a lower installed cost per ton of throughput than flotation, (b) a lower installed power requirement per ton of throughput, (c) gravity separation does not use expensive reagents, (d) the environmental impact of gravity plant effluent is considerably less than for flotation.
The present work investigates sedimentary phosphate carbonate reserves of southwest Iran that consist mainly of calcite, fluorapatite, quartz, kaolinite and illite.The phosphate ore has an average P2O5 grade of 9%-10%, classified as low-grade.As a result, various methods are used to increase concentration, such as calcination and gravity separation, and hence the research presented here aims at selecting the optimum method(s) for producing salable phosphate concentrates from low-grade deposits.
Ore Samples
The studied samples were selected from the phosphate rock reserves of the Lar Mountains in southwest Iran.The initial size of representative sample was 4 cm diameters.The degree of crushing to generate a representative sample for mineral liberation and concentration purposes had to be considered and, accordingly, optical microscopic studies were performed using a Zeiss Axioplan 2 Polarized Light Microscope (Kharazmi University, Tehran, Iran) on 45 thin sections of nine sample size fractions.The results showed that, in terms of mineral liberation, 80% of apatite ore of 100 μm (150 mesh) in size was liberated.To perform X-ray diffraction (XRD) and X-ray fluorescence (XRF) analysis, the prepared samples were sent to the Kansaran-e-Binaloud Laboratory.The results of XRD analysis are presented in Table 1 and the results of major-element XRF analysis are presented in Table 2.The results of XRD analysis show that the minerals calcite, fluorapatite and quartz are the primary minerals of the ore and kaolinite, illite and other feldspathic minerals occur at 5% or less.
Based on the tabulated results, it is evident that the Lar Mountains samples, with an initial P2O5 grade of 9%-10%, are of a very low grade, and given the high CaO concentration, can be considered as sedimentary phosphate ore with carbonate gangue.At the primary crushing stage, to avoid production of fine particles and reduce production of phosphate waste and recovery loss, phosphate representative sample with initial dimensions of 4 cm were subjected to two-stage jaw crushing in the open circuit, cone crushing in the closed circuit and finally screened to 2.4 mm (8-mesh).At the end of the crushing process, 100% of the product smaller than 2.4 mm was considered feedstock for the next stage of concentration.
Calcination
According to the results of XRD and XRF analysis, in terms of mineralogical content, it is evident that the major impurities in the phosphate ore are calcite and silica.In this study, potential issues with calcination, such as appropriate crushing to minimize the generation of slimes and determining the optimum time and temperature, were overcome by proper design of the rotary kiln, modifying the energy recovery system and finally, optimizing the overall concentration circuit.Rotary kiln applied in order to modify energy recovery system in calcination process.Fundamentally, rotary kilns are heat exchangers in which energy from a hot gas phase is extracted by the bed material.During its passage along the kiln, the bed material will undergo various heat exchange processes, a typical sequence for kilns being drying, heating, and chemical reactions that cover a broad range of temperatures.The most common configuration in rotary kiln systems is counter current flow whereby the bed and gas flows are in opposite directions.This phenomenon will be the modified energy recovery system of kilns in comparison with other mineral processing equipment.To minimize the production of slimes, a two-stage crushing circuit involving jaw crushing and cone crushing was implemented.For this purpose, a representative sample was fed into the jaw crusher and the product, after passing through a 2.4-mm screen, was fed into the cone crusher.Subsequent calcination took into account three main parameters: particle size range, temperature and duration of calcination, based on previous experiences, and laboratory testing.Optimal conditions were obtained via design tests using DX7 Software (Stat-Ease Incorporation, Minneapolis, MN, USA).Accordingly, tests using varying parameters were carried out, as follows: fractions > 2.4 mm, fractions 150 μm to 2.4 mm, fractions < 75 μm, temperatures of 850, 950 and 1050 °C, and three different durations of 60, 120 and 180 min.The resulting phosphate grade and degree of recovery of each of the tests was then evaluated.
Calcination was carried out at a semi-industrial, pilot-scale using a rotary kiln.Thermal processes in the rotary kiln provide the impetus for the physic-chemical reactions, in conjunction with using the correct combination of solid grains to ensure proper heat transfer [15].A short dry kiln was used, which required the feed particles to first be pre-heated to 130-200 °C prior to entering the main reactor [36].This pre-heating prevents an extreme temperature drop in the rotary kiln, which would lead to poor heat transfer between the solid and gas phases.The rotary drum used consisted of a steel cylinder 1.5 m in diameter and 10 m in length with countercurrent flow.The bed motion in the transverse plane of the drum is rolling, conducted at a rotation rate of 3 rpm.Studies of rotary kiln bed behavior have allowed for the development of predictively estimating bed behavior under a given operational condition.One such tool is a bed behavior diagram of Henein (1980) as shown in Figure 2 [37] presents a typical behavior for a sand bed for a 41cm (1.35 ft) diameter pilot kiln.Given the angle of repose, kiln geometry, and speed, users of such diagrams can predict what bed behavior to expect within the kiln cross section.The Froude number (Fr) assigned to the bed level in this study was considered to be between 0.5 × 10 −3 and 0.2 × 10 −1 .Also, according to the bed behavior diagram, the bed depth of particles based on a kiln rotation speed of 3 rpm and mentioned Froude number, measured 0.2 meters.As shown in Figure 3, various zones found within a rotary kiln can be discerned in the transverse plane: the active layer, including the upper part of the bed where surface renewal occurs [36] and the passive layer, which has a shear rate of zero.In the active layer, particles slide over each other via granular flow, and are returned to the top of the passive layer where particles move as a solid mass concentrically around the axis of the kiln.
To achieve the experimental objectives, particles from the crushing process were calcined at 950 °C for 120 min, determined from other experimental results and charts.To increase the residence time, some dams within the kiln were designed.After performing calcination and dissociation of carbonates, the remaining compounds enter a washing phase in which hot calcined particles at 800-950 °C are rapidly quenched with cold water, resulting in rapid heat loss release of CaO.The CaO reacts with water and forms Ca(OH)2 according to Equation (2).
CaO + H2O → Ca(OH)2 (2) After removing the Ca(OH)2 slurry, the particles were filtered and dried before entering another crushing phase in preparation for separation using a shaking table.
Shaking Table
The factors influencing effective mineral separation on a shaking table include particle size, density and shape, riffle design, deck shape and slope, water flow, feed rate and shaking motion and speed [38].In terms of particle size, it must be noted that the quenched and crushed calcined phase is screened to 150 mesh, and coarser particles that remain are then crushed to 150 mesh (based on mineralogical results and in terms of mineral liberation, 80% of apatite ore of 100 μm (150 mesh) in size was liberated), filtered and dried along with already screened particles, prior to concentration using the shaking table.
With respect to mineralogical densities, and as mentioned in Section 1.1, representative samples of the phosphate ore contain calcite, quartz, fluorapatite (collophane) and glauconite with specific gravities of 2.7, 2.65, 3.2 and 2.4 g/cm 3 , respectively.
The degree of liberation was considered the fundamental selection criterion for screening using the shaking table and, hence, parameters such as deck slope, feed water flow rate and dressing water flow rate were optimized.To achieve this, several experiments were performed on a laboratory scale and final waste, middling and concentrate products were scrutinized for the highest concentrations and recovery rates.It was found that the optimal deck slope was 10°, the optimal feed water flow rate was 8 liters per minute and the dressing water flow rate was 10 liters per minute.
Results and Discussion
The research presented here has been concerned with a pilot study on improving efficiency and recovery of higher phosphate grades from low-grade phosphate ores of southwest Iran.In the first step, samples were calcined in a rotary kiln, after crushing.The effects of three parameters on the process were assessed, namely temperature, particle size and residence time.Three size fractions were studied, namely <75 μm, 75-150 μm and >2.4 mm (results provided in Tables 3-5, respectively).For the fractional class of 75-150 μm, when considering temperature and residence time as the two key variables, it was found that the best recovery is related to a temperature of 1050 °C and residence time of 120 min.Results lead to the production of concentration with average grade of 11.9% P2O5 and recovery as 98.5% (Table 4).
For the fractional class >2.4 mm, maximum P2O5 grade is related to a temperature of 1050 °C in residence time of 120 min which led to the production of concentrate with an average P2O5 grade of 11.14% and recovery of 97.4% (Table 5).
The laboratory studies showed that these three size fractions were not ideal for calcination in the rotary kiln at a pilot-project scale.
According to the results obtained from the chemical analyses which is shown in Table 6, the ideal fraction for calcination are particles in the 0.15-2.4-mmrange.
Additionally, for the 0.15-2.4-mm-sizefraction, when considering temperature and residence time as the two key variables, it was found that the best recovery is related to a temperature of 950 °C and residence time of 120 min.Rapid quenching of the calcined samples also led to the production of concentrate with an average P2O5 grade of 17.15% and recovery of 96.04% (Figure 4).Shaking table studies performed by assigning values to variables such as particle size, desk slope, feed water flow rate, dressing water flow rate and finally base on four mentioned parameters, the grade and recovery variation versus effective parameters were investigated.Accordingly, the samples carried out from calcination tests was divided in various fractions (<100, 100-270, 270-500, 500-1000 μm) before testing.The following comparative experiments were established.The first experiment, for the fraction of 500-1000 μm, the deck slope 6°, the feed water flow rate of 8 liters per minute and table water flow rate 6 liters per minute lead to the high grade and low recovery of 40.75% and 14.71%, respectively (Table 7).
Also, results of shaking table tests in second experiment denote that separation considering the parameters such as the slope of deck as 8°, feed water flow rate of 7 liters per minute and dressing water flow rate of 7 liters per minute, for the fraction of 270-500 microns produces a concentrate with the grade (concentration) of 39.22% P2O5 and recovery of 16.38% (Table 7).
In the third experiment for the fraction of 100-270 μm, the table slope was decreased to 4° and the feed water flow rate and dressing water flow considered as 6 and 5 liters per minute respectively.These changes lead to an increase in concentrate grade but the recovery remarkably decreased to 10.74% (Table 7).
In the fourth experiment, to attain a higher grade, slope of deck was increased to 10°, the feed water flow rate was set up to 8 liters per minute and dressing water flow rate was raised to 10 liters per
Conclusions
Integrated methods of calcination and the use of a shaking table to concentrate low-grade phosphate ore (P2O5 9%-10%) from southwest Iran show that it is possible to produce concentrate with a P2O5 grade of 30.77% and recovery of 60.7%.
Short, dry rotary kilns were used to calcine the ore after preheating to temperatures of 130-200 °C.In this method, the optimum time and temperature were determined to be 120 min and 950 °C, respectively.The ideal size fraction on which to perform calcination was found to be 150 μm to 2.4 mm.With respect to shaking table mineral separation, it was found that the optimal slope of the deck was 10°, a feed water flow rate of 8 liters per minute and a dressing water flow rate of 10 liters per minute.
In conclusion, it has been shown that it is possible to effectively employ an integrated and technical method of phosphate concentration that overcomes the inherent problems of the ore, including low-grade, slimes generation and physical property similarities between constituents.Additionally, this method does not incur a high operational cost compared with other methods, and hence it is possible to acquire a marketable product of appropriate grade for further processing into fertilizer, or to produce technical-or food-grade phosphoric acid.
Figure 1 .
Figure 1.Schematic illustration of the arrangement of a riffled shaking table; note the feed input at the top right corner of the table.
Figure 2 .
Figure 2. Bed behavior diagram for a rotary kiln indicating particle behavior in relation to rotational speed, bed depth and % fill (adapted from [22]).
Figure 3 .
Figure 3. Schematic cross section of a rotary kiln drum, indicating the distribution of active and passive regions in relation to the bed and freeboard (adapted from[36]).
Figure 4 .
Figure 4.The effect of time and temperature on the grade and recovery (R) of P2O5 for the 0.15-2.4-mmfraction
Table 1 .
X-ray diffraction (XRD) analysis of a representative phosphate ore sample from the Lar Mountains.
Table 2 .
Major element geochemical analysis of a representative phosphate ore sample from the Lar Mountains.L.O.I: Loss on Ignition.
Table 3 .
Results of calcination of phosphate ore <75 μm in size.
Table 4 .
Results of calcination test for the fractional class of 75-150 μm.
Table 5 .
Results of calcination test fractional class of >2.4 mm. | 5,089 | 2015-06-25T00:00:00.000 | [
"Materials Science"
] |
Fabrication of Polyaniline Based Chemical Sensor for Ammonia Gas Detection
Polyaniline (PAni) is one of the versatile conducting polymers due to inexpensive monomer, environmental benign, high conductivity and easy preparation. In this study, PAni was doped by dioctyl sulfosuccinate sodium (AOT) to enhance the processability of PAni in gas sensor detection. PAni/AOT was synthesised via in-situ polymerisation at 0°C for 24 h in the presence of potassium peroxydisulfate (KPS). The characterisations were done by Fourier transform infrared (FTIR) and ultravioletvisible (UV-Vis). FTIR spectra depict the main characteristic peaks of PAni/AOT at 1550 cm−1 and 1455 cm−1 which indicates quinoid and benzoid units, respectively. The presence of AOT dopant was confirmed by observing the peak at 1731 cm−1. UV-Vis spectra further confirmed the PAni/AOT is in the doped state by exhibiting a characteristic peak at ~800 nm. Sensor performance of PAni/AOT film was studied in terms of selectivity, long term stability and method validation. It was found that, PAni/AOT exhibited stability up to 1 week. Besides that, PAni/AOT film also exhibited good selectivity for NH3 in the presence of common interfering species such as hexane, diethyl ether and acetone gas. In conclusion, PAni/AOT was successfully prepared for NH3 detection. The limit of detection of PAni/AOT was ~11ppm.
INTRODUCTION
Ammonia (NH 3 ) has been identified as a toxic compound with its atmospheric threshold value of 25 ppm. 1 It can cause irritation when inhaled or contacted on skin and eyes. At high concentration, it can cause temporary blindness and severe injury to mucous membrane. 2 Conducting polymer is a material that exhibits the characteristic of a metal while preserving the characteristic of a polymer. Conductive polymer has been recently used as one of the effective sensor. Amongst all of the conductive polymers, polyaniline (PAni) is one of the most promising polymer to be cultivated in gas sensor study. PAni can exhibit various oxidation state such as emeraldine salt, emeraldine base, pernigraniline and leucomeraldine. Interaction of NH 3 with PAni can alternate the properties of PAni in term of its oxidation state. A study on PAni/ESP blend has shown to have successful application on NH 3 sensing. 3 This is due to its stability, low cost and ease of synthesis. 2 However, the PAni has low mechanical strength and processing ability that hinders its performance as a gas sensor. 4 Therefore, in order to improve its processability, doping of PAni with suitable surfactant is commonly used. Up to date, various surfactants have been utilised such as camphorsulfonic acid (CSA), sodium dodecyl sulfate (SDS) and dodecylbenzenesulphonic acid (DBSA). 2 The objective of this study is to manipulate PAni/AOT as a conducting polymer which will be applied as an NH 3 sensor. The PAni will be synthesised by using chemical oxidation method and doped with dioctyl sulfosuccinate sodium (AOT). The characterisation of PAni/AOT was analysed using ultraviolet-visible (UV-Vis) and Fourier transform infrared (FTIR).The PAni/AOT sensor performance was evaluated using multimeter.
Material
Aniline (Ani) was purchased from Merck. Ammonia (NH 3 ) and potassium peroxudisulfate (KPS) were purchased from QReC and HmbG, respectively. Hydrochloric acid (HCl) and toluene were purchased from R&M Chemical. AOT was provided by University of Malaya, Malaysia. Toluene was used as solvent for PAni/AOT. Distilled water was used throughout this experiment.
Synthesis of PAni/AOT Film
The synthesis was done at 0°C for 24 h using an in-situ polymerisation technique. An amount of 7 mmol of AOT was dissolved in 1 M of HCl and stirred for 1 h.
An amount of 5 mmol of Ani was then added dropwise and stirred for 1 h. After that, 5 mmol of pre-cooled KPS was added dropwise for 1 h. The mixture was left stirred for 24 h to allow polymerisation. After 24 h, the PAni precipitate was filtered and washed several times using distilled water. The PAni precipitate was dissolved in toluene. 2 The solution was casted on a glass slide using spin coating technique. The spin-coated PAni film was used as a gas sensor in NH 3 detection.
Characterisation of PAni/AOT
PAni/AOT film was characterised by using FTIR Perkin Elmer model FTIR spectrum 100 spectrometer in the range of 650-4000 cm −1 and PG instrument T80/T80+ ultraviolet-visible (UV-Vis) in the range of 300-900 nm.
Sensor Performances of PAni/AOT Film
The sensitivity of PAni/AOT film was calculated using equation where, R i = intial resistance and R f = final resistance. 5 The long term stability was studied by performing sensitivity test of the PAni film each day for up to 1 week. 2 The sensitivity of sensor was observed by using multimeter. The selectivity study was done by exposure of PAni/AOT film with other gases such as acetone, diethyl ether and hexane. 5
Characterisation of PAni/AOT Film
The characterisation for PAni/AOT was done by FTIR and UV-Vis spectrometers. Figure 1 depicts FTIR spectrum of PAni/AOT that recorded in the range of 650-4000 cm −1 . The bands at 1235 cm −1 and 1290 cm −1 were attributed to the C-N+ stretching vibration of polaron structure and delocalisation of π electrons along the PAni backbone, respectively. 2 The presence of emeraldine salt state of PAni/AOT was confirmed by the bands at 1455 cm −1 and 1550 cm −1 that corresponds to the stretching vibration of C=N and C=C at the quinoid site and stretching vibration of C-C benzenoids in the PAni structure. 4 The AOT dopant structure confirmed by C=O and S=O symmetry at 1731 cm −1 and 1178 cm −1 , respectively. 2 The band at 3238 cm −1 and 3158 cm −1 attributed to C-H and N-H at PAni backbone. 4 The O-H arises at 3448 cm −1 due to impurities. Figure 2 shows UV-Vis absorption spectrum of PAni/AOT in the range of 300-900 nm. The spectrum exhibits three distinctive peaks at ~360 nm, ~420 nm and ~790 nm. The peak at ~360 nm indicates the π-π* conjugation of benzenoid structure. The shoulder peak at ~420 nm attributed to the polaronic character while the peak at ~790 nm contributes to the π-polaron that showed the doped state of quinoid cation. 2,4 Thus, FTIR and UV-Vis spectra confirms the PAni/AOT present in the conducting state which is better known as emeraldine salt. Polaron -π* π-π* Absorbance Figure 2: UV-Vis spectra of PAni/AOT.
Selectivity of PAni/AOT film
The selectivity study was conducted by testing the PAni/AOT sensor in different target gases such as NH 3 , acetone, diethyl ether and hexane at similar concentration. The PAni/AOT sensor showed high selectivity toward NH 3 as shown in Figure 3 compare to other gases. Based on the hydrophilic nature of PAni/AOT, the higher the polarity of vapour gas, the higher the interaction of PAni/AOT chains with vapour gases. 6 PAni/AOT is more selective toward NH 3 due to the interaction of PAni/AOT with NH 3 is ion-dipole while interaction of PAni/AOT with other gases are dipole-induced dipole interaction. Dipole-induced dipole interaction is weaker compared to ion-dipole interaction.
Long-term stability study of PAni/AOT film
The long-term stability study of PAni/AOT film in NH 3 was studied for one week and shown in Figure 4. PAni/AOT shows good sensor response (> 90%S) for 1 week. 5 However, the PAni/AOT's sensor sensitivity decreases after one week due to humidity. 7 Figure 4 also displays that PAni/AOT can be reused up to 7 days. The improper storage of PAni/AOT film would lead to an interaction with ambient air that causes degradation of the film as well. 7 with correlation coefficient of 0.9812. Limit of detection (LOD) was calculated by using STYEX method. 8 The LOD was calculated using Equation 1:
Method validation
The LOD obtained for PAni/AOT was ~11 ppm. The LOD of PAni/AOT was reliable because it is lower compared to NH 3 threshold value. Besides, PAni/ AOT film shows high sensitivity compared to reported articles and lower than the threshold value of NH 3 . 9,10 Moreover, PAni/AOT was easy to synthesise, inexpensive and easy to setup.
CONCLUSION
As a conclusion, the PAni/AOT was successfully synthesised and applied effectively in NH 3 detection. The PAni/AOT film exhibited good sensor performances in terms of long term stability and selectivity. Besides, PAni/AOT film also reported good LOD of ~11 ppm in comparison with other PAni sensors in the literature. Therefore, it has been proven that the use of AOT as a surfactant to improve the processability of PAni in sensor application is most welcomed and future studies will be directed towards comparing PAni/AOT with other PAni/surfactants. | 1,969.8 | 2018-03-15T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Reduction in the uncertainty of the neutron-capture cross section of 210Bi: Impact of a precise multipolarity measurement of the 2− → 1− main ground-state transition
The mixing ratio of the main 320-keV, M 1 + E 2 ground-state γ transition in 210 Bi has been more precisely quantified, allowing a significant reduction in the uncertainty of measurements of the neutron-capture cross section to the ground state of 210 Bi from 25% to 0.9%. Accurate values for neutron-capture cross sections to both the ground and long-lived 9− isomeric state at 271 keV in 210 Bi are of particular importance as Pb-Bi finds increased usage in Accelerator Driven Systems.
Introduction
The 210 Bi nucleus is a one-proton one-neutron particle system with respect to the doubly-magic 208 Pb core, therefore the investigations of its structure may deliver information on the properties of nuclei around closed shells. Moreover, studies of the 209 Bi(n,γ ) 210 Bi reaction are very important because Pb-Bi can be used as a coolant in fast reactor systems or as a spallation neutronproduction target in Accelerator Driven Systems. The measurements of the neutron-capture cross section are of particular interest because the 209 Bi(n,γ ) 210 Bi process contributes significantly to the short-and long-term radiotoxicity of the material used.
The 210 Bi nucleus is populated by the neutron-capture reaction in the 4605-keV state, which then emits γ -ray cascades feeding the 1 − ground state or the long-lived 9 − isomer. Beta decay of the 210 Bi ground state with the half-life of 5.013 d produces the 210 Po nucleus, which as an α emitter with T 1/2 = 138 d is a source of shortterm radiotoxicity. On the other hand, due to the large spin difference with respect to the 1 − ground state, the second excited state with a spin and parity of 9 − decays by α emission with T 1/2 = 3.04 × 10 6 y and contributes to the long-term radiotoxicity. The cross section for population of both states is of primary interest for estimating the a e-mail<EMAIL_ADDRESS>amount of long-term waste production when Bi is used in the cooling systems of nuclear reactors.
The value of the neutron-capture cross section for the isomeric state population was previously established as 17.7(7) mb [1], while the neutron-capture cross section leading to the ground state is more difficult to determine. Such studies rely significantly on a precise knowledge of the α 320 conversion coefficient of the 320-keV, 2 − → 1 − main ground-state transition. The value of α 320 conversion coefficient cannot be calculated precisely, due to the fact that it depends on the M1/E2 multipolarity mixing of the 320-keV line, which so far has not been measured with sufficient precision. In previous studies, it was inferred from theoretical considerations that the 320-keV transition could be of almost pure M1 character [1]. However, this assumption has not been confirmed experimentally. In Ref. [1], the authors report three possible values of neutron-capture cross section to the ground state: 21.5(9), 19.3 (8), and 17.2(7) mb, depending on the assumed multipolarity of the 320-keV γ transition, i.e., pure M1, 50% M1 + 50% E2, or pure E2, respectively.
We present revised calculations of the value of the neutron-capture cross section to the ground state in 210 Bi, with significantly reduced uncertainty. This was possible after defining with high accuracy the multipolarity mixing for the 320-keV line, as extracted from the γ angular correlation data collected with the HPGe EXILL array, at Institut Laue-Langevin in Grenoble (France). The analysis involved the minimization of the multivariable χ 2 function constructed from the experimental angular correlation coefficients of 7 pairs of strong γ rays in 210 Bi.
Experimental setup
The cold-neutron capture reaction on 209 Bi was used to investigate the low-lying structure of the 210 Bi nucleus. The experiment was performed at the Institut Laue-Langevin (Grenoble, France) on the PF1B cold-neutron facility. After collimation, the capture flux on target was 10 8 neutrons/(s × cm 2 ). The EXILL array consists of 16 HPGe detectors: 8 EXOGAM clovers [2], 6 GASP detectors [3] and two clovers from the ILL LOHENGRIN instrument and has been used to measure coincidences between γ rays [4,5].
The collected data were sorted offline into a γ γcoincidence matrix and a γ γ γ -coincidence cube with a time window of 200 ns. Based on the present data, the decay scheme of 210 Bi from the capture state (at 4605.2(1) keV) was established [6]. A large number of paths was found: 64 primary γ rays were identified, including 40 newly found branches. They feed the lowerlying states populating a complex level structure: a total of 70 discrete states were observed.
Angular correlations of γ rays
The 8 detectors of EXOGAM were arranged around the target each at 45 • , forming a ring in a plane perpendicular with respect to the beam. This allowed us to sort double γ γ -coincidence data into three matrices corresponding to average angles between detectors of 0 • , 45 • and 90 • . The analysis of γ -ray angular correlations provided information about transitions multipolarities, which confirmed previously known spins as well as helped with defining new assignments. We have focused on the determination of the 320-keV γ ray multipolarity, which has a significant impact on the calculations of the neutroncapture cross section to the ground state in 210 Bi.
The well-known formalism describing the anisotropy in the emission of γ rays with respect to the nuclear spin direction was applied [7,8]. The angular correlation function is usually expressed by the formula: (1) where P m (cos θ ) (m = 2, 4) are Legendre polynomials, A 0 is the normalization coefficient, while values of A m depend on the character of the two transitions considered. These parameters change with the δ k mixing ratios (k = 1, 2 indicates the number of transition), which is the ratio of intensities of L + 1 pole to L pole radiation [7]. Analysis of the non-stretched transition requires the minimization of the χ 2 cost function for δ k = 0: where the experimental A m and theoretical A theor m coefficients are compared under a particular hypothesis for the spins and multipolarities, as a function of the δ k mixing ratio. The minimum of the χ 2 function points to the most probable value of the δ k parameter. The complete information about the admixture of higher order of multipole in a given γ ray can be obtained directly from angular correlations only if the second transition from the investigated pair is pure or its mixing ratio is firmly established. Knowledge of the transitions multipolarities is rather scarce, and there is no known γ ray with a firm multipolarity assignment in coincidence with the 320-keV line. For example, one can consider the 674-320-keV cascade leading to the ground state from the 993-keV level (Fig. 1). The angular correlation function constructed for this pair of transitions is displayed in Fig. 2(a) (red curve). The discrepancy between theoretical calculations performed for the hypothesis of pure E1 − M1 cascade (yellow dashed-dotted line in Fig. 2(a)) and the experimental result suggests multipolarity mixing in one or both transitions. However, to find the δ k values in this case one must take into account other pairs of transitions being in coincidence with the 674-320-keV cascade, and use the multivariable minimization method outlined in the next section. A detailed description of this procedure can be found in [9].
Multivariable minimization
When the character of both γ rays in a cascade is not known, a minimum of 3 transitions, coincident with each other, is required in order to extract the mixing ratios by means of the angular correlation technique. We have found three very strong γ rays in coincidence with the 674-320-keV pair (i.e., the 1013-, 2505-, and 3081-keV lines). All five transitions are marked by asterisks in Fig. 1. They were combined into 7 pairs of γ rays in order to obtain 7 independent angular correlation functions. Next, the fitted A n2 and A n4 coefficients were used (Fig. 2) to construct the χ 2 n functions with the formalism given by Eq. (2) (n = 1, ..., 7 indicates a given pair of γ rays). Each χ 2 n function depends on two parameters, δ n1 and δ n2 as no value of mixing ratio is known (the mixing parameters are denoted later as e.g., δ 320 , where the index refers to the An example of the single χ 2 n function for the 674-320-keV pair of transitions is reported in Fig. 3(a). The χ 2 n function does not have any well-defined minimum, so many (δ 674 , δ 320 ) = 0 are possible in this case. Therefore, in order to define the δ 320 mixing ratio, the cost function χ 2 was constructed in the following form: where 1/ν is the normalization factor and ν is the number of degrees of freedom. The nonlinear least-square problem defined by Eq. (4) was solved by minimizing the cost function χ 2 , using the Downhill Simplex algorithm (also known as Nelder-Mead method) [10]. The minimization algorithm determined 3 lowest minima, which gives three sets of mixing ratios for investigated transitions, with very similar values of χ 2 [9]. For those three minima, only two different values of δ 320 : 0.05(2) and −3.04(13) were found. The δ 320 = −3.04 value would imply a significant (90%) admixture of E2 multipolarity, which, in consequence, would result in a lower value of the neutron-capture cross section to the ground state in 210 Bi (comparing to this value calculated assuming pure M1 multipolarity of the 320-keV γ ray). However, as discussed in [9] with respect to the measurement of the half-life of the 320-keV state (T 1/2 = 7.5(14) ps [11]), one can conclude that the δ 320 = −3.04 solution is highly unlikely (the typical T 1/2 values for E2 in the neighboring nuclei would be much longer). Therefore, for the recalculation of the neutron-capture cross section to the ground state in 210 Bi, we adopted only the value δ 320 = 0.05 (2), which confirms almost pure M1 multipolarity of the 320-keV transition.
The quality of the minimization procedure is shown in Fig. 3(b) by projecting the χ 2 cost function on the plane defined by the δ 674 and δ 320 mixing parameters. The construction of the multivariable χ 2 results in a well pronounced minimum, in contrast to the single χ 2 n function (as shown in Fig. 3, for the 674-320-keV pair of γ rays).
Recalculation of the neutron capture cross section
The value of E2/M1 mixing in the 320-keV transition can be then employed to recalculate the neutron-capture cross section to the ground state, σ gs , in 210 Bi. By adopting the standard deviation of the extracted δ 320 parameter, the 95% confidence range was calculated to be 0.024-0.076. We note that the presence of intermediate transitions in the investigated cascades may lead to an attenuation of γ -γ correlation, which would result in lower values of δ 320 mixing parameter. Therefore, we consider δ 320 = 0.076 as an upper limit and this value will be used to recalculate the neutron-capture cross section. This limit corresponds to the 0.6% admixture of E2 multipolarity in the 320-keV transition.
The cross section value can be obtained by following the analysis described in [12] and using the formula: where i I i is the sum of the intensities of the γ rays leading to the ground state, reported by Borella et al. and corrected for internal conversion by the factor (1 + α i ) (see Table 1). The σ gs cross section was calculated relative to the partial capture cross section σ 4055 = 8.07 (14) mb Table 1. The energies, assumed multipolarities and intensities of the γ rays (taken mainly from [12]) used for recalculation of the neutron-capture cross section are given in columns 1-3. Intensities marked with an asterisk come from the work [13]. Column four provides the correction factor for internal conversion.
E γ Multipolarity I γ [12] ( for the very intense 4055-keV line in 210 Bi [12]. In this calculation the intensities of the 517-and 1990-keV γ rays, not observed by Borella et al., were taken from [13]. The intensities of the 517-, 1118-, and 1981-keV γ rays were used to estimate the population of the 47-keV state feeding the ground state by a strongly converted M1 transition. The α i conversion coefficients were obtained using the BrIcc calculator [14], assuming the lowest possible order of multipole for the transitions that do not have established multipolarities. The conversion coefficient for the line carrying most of the intensity, i.e., the 320-keV transition, was calculated taking into account its E2/M1 mixing ratio equal to 0.076. The resulting value of σ gs is 21.3(9) mb. As the δ 320 = 0.076 should be considered as an upper limit, σ gs was also calculated for pure M1 multipolarity of the 320-keV line to give a value of 21.5(9) mb.
Conclusions
We have proposed a recalculation of the neutron-capture cross section leading to ground state 210 Bi. M1/E2 multipolarity was adopted for the 2 − → 1 − 320-keV line defined by an analysis based on minimization of a multivariable χ 2 cost function [9] and the intensities reported in [12]. The δ 320 = 0.076, corresponding to an upper limit of 0.6% admixture of E2, defines the lower limit of σ gs , i.e., 21.3 mb. The resulting range of possible σ gs , 21.3(9)-21.5(9) mb, has been narrowed significantly, reducing the relative uncertainty on the 209 Bi(n,γ ) 210 Bi ground state cross-section from 25% [1] to 0.9%. These resulting cross section limits may serve for accurate projections of the 210 Po inventory in nuclear reactors and Accelerator Driven Systems when using Pb-Bi as coolant. | 3,256.4 | 2017-09-01T00:00:00.000 | [
"Physics"
] |
Solution strategy based on Gaussian mixture models and dispersion reduction for the capacitated centered clustering problem
The Capacitated Centered Clustering Problem (CCCP) - a multi-facility location (MFL) model - is very important within the logistics and supply chain management fields due to its impact on industrial transportation and distribution. However, solving the CCCP is a challenging task due to its computational complexity. In this work, a strategy based on Gaussian mixture models (GMMs) and dispersion reduction is presented to obtain the most likely locations of facilities for sets of client points considering their distribution patterns. Experiments performed on large CCCP instances, and considering updated best-known solutions, led to estimate the performance of the GMMs approach, termed as Dispersion Reduction GMMs (DRG), with a mean error gap smaller than 2.6%. This result is more competitive when compared to VNS (Variable Neighborhood Search), SA (Simulated Annealing), GA (Genetic Algorithm), and CKM (CKMeans) and faster to achieve when compared to the best-known solutions obtained by Tabu-Search (TS) and Clustering Search (CS). ABSTRACT 10 The Capacitated Centered Clustering Problem (CCCP) - a multi-facility location (MFL) model - is very important within the logistics and supply chain management fields due to its impact on industrial transportation and distribution. However, solving the CCCP is a challenging task due to its computational complexity. In this work, a strategy based on Gaussian mixture models (GMMs) and dispersion reduction is presented to obtain the most likely locations of facilities for sets of client points considering their distribution patterns. Experiments performed on large CCCP instances, and considering updated best-known solutions, led to estimate the performance of the GMMs approach, termed as Dispersion Reduction GMMs (DRG), with a mean error gap smaller than 2.6%. This result is more competitive when compared to VNS (Variable Neighborhood Search), SA (Simulated Annealing), GA (Genetic Algorithm), and CKM (CKMeans) and faster to achieve when compared to the best-known solutions obtained by Tabu-Search (TS) and Clustering Search (CS).
22
Facilities are very important infrastructure within the supply chain as they support production, 23 distribution and warehousing. Due to this, many operative processes associated to facilities are 24 subject to optimization. Fields such as facility layout planning are crucial for smooth material 25 handling and production flow (Mohamadghasemi, A. and Hadi-Vencheh, A., 2012; Hadi-Vencheh, 26 A. and Mohamadghasemi, A., 2013;Niroomand, S. et al., 2015). 27 On the other hand, where to locate facilities within specific regions is a central problem for 28 strategic decisions of transportation and distribution (Chaves, A.A. et al., 2007). This is because 29 the distance between the facilities and the customers (demand or client points) is crucial to 30 provide efficient transportation and distribution services.
117
• At each iteration, Gaussian distribution-based clustering performs, for a given point, a
118
"soft-assignment" to a particular cluster (there is a degree of uncertainty regarding the as-119 signment). The centroid-based clustering performs a hard-assignment (or direct assignment) 120 where a given point is assigned to a particular cluster and there is no uncertainty.
121
Due to these differences, the GMM-based clustering was considered as an alternative to generate feasible solutions for the CCCP. In terms of the CCCP formulation described in Section Introduction a cluster can be modeled by a single Gaussian probability density function (PDF). Hence, the location "patterns" of a set of clients X can be modeled by a mixture of K Gaussian PDFs where each PDF models a single cluster. If the set contains N clients, then X = [x i=1 , x i=2 , ..., x i=N ] and the mixture can be expressed as: where k = 1, ..., K and |K| = p is the number of Gaussian PDFs, p(X|k) represents the probabilities of each Gaussian PDF describing the set of clients X (Theodoridis, S. and Koutroumbas, K., 2010), and P k is the weight associated to each Gaussian PDF (hence, K k=1 P k = 1.0). Each Gaussian component can be expressed as: where m k is the mean vector and S k is the covariance matrix of the k-th Gaussian PDF or
135
The advantage of this Gaussian approach for clustering is that faster inference about the 136 points x i that belong to a specific cluster k may be obtained considering all points. In this 137 context, it is important to mention that due to the probabilistic nature of the inference process, The EM algorithm starts with initial values for m k , S k and P k . Values for m k and S k were randomly generated as follows: where (cx min , cx max ) and (cy min , cy max ) are the minimum and maximum values throughout all 182 compressed and coded x and y coordinates respectively, and I l is the identity matrix of size l × l. 183 For P k a lower bound for K was obtained by considering the total demand of the points x i and the capacity of the clusters C k . Because all clusters have the same capacity, C k = C. Then, K and P k were obtained as follows: The stage of Expectation starts with these initial values for m k , S k and P k . An initial 184 computation of assignment or "responsibility" scores γ(z ik ) is performed to determine which x i is more likely to be associated to a particular cluster (and thus, to belong to this cluster) likelihood, then one of them is randomly assigned. An example of this assignment process is 217 presented in Figure 3.
218 = matrix of dimension k N that contains the (z ik ) scores for the set of x i points x 1 is more likely to be generated by (or being associated to) the cluster k By determining the unique assignment of each point x i to each cluster k at Step 2 of the EM algorithm (see Figure 2), the number of points assigned to each cluster is obtained. This leads to determine the cumulative demand of the points assigned to each cluster. This information is stored into the vector: where D k represents the cumulative demand of the points assigned to cluster k and it must satisfy D k ≤ C k . This vector is important to comply with the capacity restrictions because it was found that homogenization of the cumulative demands D k contributes to this objective. Homogenization is achieved by minimizing the coefficient of variation between all cumulative demands: The objective function defined by Eq. (14) is integrated within the evaluation step of the Table 1 were considered.
Manuscript to be reviewed
Computer Science doni1 1000 6 doni2 2000 6 doni3 3000 8 doni4 4000 10 doni5 5000 12 doni6 10000 23 doni7 13221 30 SJC1 100 10 SJC2 200 15 SJC3a 300 25 SJC4a 402 30 TA25 25 5 TA50 50 5 TA60 60 5 TA70 70 5 TA80 80 7 TA90 90 4 TA100 100 6 • In order to compute the error, gap or deviation from the updated best-known solutions the error metric presented by (Yousefikhoshbakht, M. and Khorram, E., 2012) was considered: where a is the cost or distance of the best solution found by the algorithm for a given instance while 287 b is the best known solution for the same instance. In this case it is important to mention that 288 the best-known solution is not necessarily the optimal solution due to the NP-hard complexity of 289 the CCCP. Initially, this metric was computed for the DRG, VNS, SA, CS, TS and GA methods 290 because the reference data was available for all sets of instances. Manuscript to be reviewed Computer Science Table 2 presents the best results of the DRG meta-heuristic for the considered instances.
293
Information regarding the runs performed by each method to report the best result is also 294 presented when available. Also, information regarding the programing language and the hardware 295 used by the authors of the reviewed methods were also included. data of the CKM and TCG methods were available. As presented in Table 3, the DRG meta-308 heuristic is more competitive than the CKM method. Also, as previously observed, the DRG is 309 more consistent.
310
When compared to the TCG method, this is more competitive than the DRG approach even 311 though the error gaps are minimal (less than 1.5%).
320
Regarding speed, Figure 6 presents the computational (running) times reported by the 321 reviewed methods. While TS and CS are the benchmark methods, these take over 25000 seconds 322 to reach the best-known solution for the largest instance. Note that for these methods, their 323 computational times exponentially increase for instances larger than 6000 points.
324
In contrast, SA is very consistent with a computational time of approximately 1000 seconds 325 through all instances. GA significantly increases for instances larger than 6000 points (up to 326 7000 seconds for the largest instance). However, these methods have the largest error gaps as 327 reviewed in Figure 5. The speed pattern of DRG is very similar to that of GA, however, as 328 reviewed in Figure 5, its error gap is the closest to the benchmark methods for instances larger 329 than 6000 points.
330
It is important to mention that this comparison may not be fair due to the differences in the 331 programming language and the hardware used for implementation and testing of all the methods.
332
In order to compare running speed all methods should be tested with the same hardware and Manuscript to be reviewed Computer Science
Manuscript to be reviewed
Computer Science | 2,238.6 | 2021-02-03T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Ablation-Free Laser Printing of Structural Colors in Reflection at 25,000 DPI
Using direct femtosecond laser patterning of metal-insulator-metal (MIM) sandwich designed to support Fabry-Perot mode in the visible spectral range we demonstrate new practically relevant strategy for high-resolution color printing. Irradiation of the MIM sandwich by tightly focused laser pulses allows to produce unique 3D surface nanostructures – hollow nanobumps and nanojets - locally modulating surface reflectivity. Laser processing parameters control the 3D shape of such nanostructures allowing to gradually tune the reflected color from reddish brown to pure green. Up-scalable ablation-free laser fabrication method paves the way towards various applications ranging from large-scale structural color printing to optical sensors and security labeling at a lateral resolution of 25,000 dots per inch.
Introduction
Unlike organic and inorganic pigments and dyes, structural colors originating from the interaction of broadband visible radiation with optically resonant nanostructures are non-fading and stable against UV radiation and thermal treatment [1,2]. Structural color technology is rather cheap in recycling, sustainable and durable, allowing to obtain vibrant (as compared to organic dyes) color tones with an extended gamut. Furthermore, utilization of optically resonant nanostructures paves the way toward high-quality imaging with the lateral resolution up to 100,000 dots per inch (DPI) that corresponds to the Abbe diffraction limit and makes the boundaries between neighboring pixels indistinguishable even with best dry microscope objectives. Lateral resolution of structural colors technology is two orders of magnitude higher comparing to those for best commercial printers. In the near future, structural colors are expected to revolutionize several technologically relevant areas as optical filters, next-generation displays, color marking and anti-counterfeiting of goods, etc. Moreover, a wide range of available materials for realization of optically resonant nanostructures makes the structural color technology potentially CMOScompatible.
Despite the well-known physics of structural colors derived from standard bulk optics as prisms, thin films, photonic crystals or diffraction gratings, the interest in structural coloring of surfaces has recently refreshed. In part, this can be attributed to recent advances in such areas as nanophotonics, plasmonics and meta-optics, as well as the spreading of high-resolution planar nanofabrication technologies as electron-and ion-beam lithography that allowed to shrink the size of the structural colored pixel well below optical diffraction limit. Using semiconductor materials with high refractive index and low dissipative losses as well as plasmon-active metals [2,3], it is possible to fabricate sub-wavelength nanostructures with resonant optical response that can be tailored by varying characteristic geometry of nanostructures and their arrangement. Along with spectrally adjustable optical response, such nano- 2 resonators can be also fabricated to have polarization anisotropy and angular dependence allowing to realize various schemes of optical information encryption [5,6].
In spite of a new look borrowed from meta-optics, majority of works used rather classical schemes based on thin-film Fabry-Perot filters [5], percolated films [7] and diffraction gratings [8] being combined with standard planar nano-resonators like nanodiscs, and nanoholes, whose planar geometry and arrangement are determined by planar fabrication technologies. Additional functionality and controllability over the single-pixel color and brightness appears to be achieved using 3D optical nanostructures. However, the ability to vary the height of the optically resonant nanostructures in each individual pixel of a color-coded picture cannot be implemented using the time-and money-consuming planar lithographic fabrication technologies.
Pulsed laser radiation is known to drive ultrafast solid-liquid phase transition on the surface of the exposed material that allows its sculpturing and subsequent resolidification in the form of various surface nano-morphologies like nanoparticles, nanoripples and nanobumps [9,10]. The latter type of structures is of particular interest as the 3D geometry of the nanobumps can be easily controlled by varying only the laser fluence and focusing conditions. Such structures can be imprinted on the surface of all plasmonactive metal films without ablation (ejection of the nanoparticles) thus ensuring ultra-clean and reproducible fabrication. Previously, by taking advantage of geometry-dependent resonant scattering of isolated nanobumps we demonstrated the direct laser writing of plasmonic colors that can be observed in the dark-field back-scattering regime [9]. Here, we show that ablation-free laser patterning of the metalinsulator-metal (MIM) sandwich with nanobumps allows to realize high-resolution (up to 25,000 DPI) structural coloring in more practically relevant reflection regime at high lateral resolution.
Material and Methods
For preliminary estimation, structural colors in reflection were generated by patterning a specially designed with FDTD numerical simulations MIM film containing 50-nm thick Au top layer (transmission ~ 6.5%), 100-nm thick Al2O3 middle layer and 500-nm thick Ag bottom layer thermally evaporated above silica glass slide (inset in the Figure 1a). The as-fabricated MIM film has reddish (as compared to the ordinary 50-nm thick Au on glass) color and represents a Fabry-Perot cavity designed to have a sharp resonant feature at 540 nm in reflection spectrum (Figure 1a).
The top layer of the MIM cavity was directly patterned using second-harmonic (515 nm) 200-fs laser pulses generated by regeneratively amplified Yb:KGW laser system (Pharos, Light Conversion). Laser pulses were focused onto the surface of MIM top layer using dry microscope objective having the numerical aperture (NA) of 0.42. The laser system was synchronized with a PC-driven attenuator (Standa) and 3D nanopositioning stages (Aerotech Gmbh.) allowing to process the surface according to programmable trajectory and vary the incident pulse energy. Morphology of the laser-printed nanostructures was correlated with laser-processing parameters using scanning electron microscopy (SEM, Ultra 55+, Carl Zeiss).
Reflectivity of the laser-patterned areas was studied using home-built microspectroscopy setup containing a bright-field optical microscope confocally aligned with a grating type spectrometer (Shamrock 303i, Andor) equipped with a thermo-electrically cooled CCD camera (Newton, Andor). Adjustable pinhole in the image plane was used to control the signal acquisition area. The reflection spectra were collected with an optical objective with NA=0.42 and normalized onto the signal from the bulk silver mirror. Reflection spectra were converted into the chromaticity coordinates of the huesaturation-value (HCV) color space using the software package developed in [9].
Results and Discussions
Single-pulse ablation of "thermally" thin glass-supported Au films was extensively studied showing the formation of parabola-or cone-shaped hollow protrusions (nanobumps) at pulse energy E below the film ablation threshold [11]. Such behavior can be explained by a sequence of laser-induced phenomena as fast melting, acoustic relaxation at the film-substrate interface and resolidification [12]. Completely similar behavior was found for top Au layer of the MIM sandwich irradiated by tightly focused femtosecond laser pulses. In particular, similar nanobumps appeared upon laser irradiation with their lateral size and the height increases with a pulse energy E, as it is illustrated by a series of side-view SEM images (Figure 1b).
Remarkably, observation of the laser-patterned areas covered with nanobumps with an ordinary bright-field optical microscope revealed their structural colors (Figure 1c). More importantly, the color was found to be tuned from reddish brown to pure green upon increase of the nanobumps size as 3 illustrated by a series of bright-field optical images collected with microscope objective at NA=0.42. The nanobumps in all the arrays were printed at a periodicity of 1 µm that ensured the homogeneously colored surface where the isolated nanostructures can not be resolved. Reflection spectra of the laser-patterned areas allowed to correlate the change of the structural color with corresponding variation of the nanobump morphology (see Figure 1d). As can be see, the increase of the nanobumps height causes the gradual decrease of the reflectivity in the yellow-red part of the optical spectrum. In contrast, blue part of the spectrum shows slower decrease of the reflectivity versus the nanobump size that explains the resulting green color of the textured surfaces printed at pulse energy of 1.8 nJ. Reflection spectra were further analyzed by converting them into the coordinates on the HCV color space to reveal the tuning range of the colors than can be reproduced using the developed laser-printing technique strategy. This data is summarized in Figure 1e illustrating that various rather pure colors can be reproduced via direct laser printing by changing the only experimental parameter -applied pulse energy. Noteworthy, imprinted structural colors of the laser-textured areas appeared similarly upon visualization with microscope objectives having variable NA that defines the collection angle of the reflected signal. Figure 1f compares the same laser-textured area produced at E=1.8 nJ upon its observation with low-and high-NA optics. As can be seen, visualization at NA=0.8 allowed to identify separate nanobumps that appeared as the green spots similarly to the whole laser-textured areas visualized at NA=0.3. In particular, this clearly illustrates possibility of the developed approach to imprint optical information and colored images at a lateral resolution up to 25,000 DPI. Finally, as an illustration of such possibility we imprinted 4 several micro-scale surface areas arranged to form "FEFU" letters visualized in the reflection mode of the optical microscope at NA=0.42 (Figure 1g).
Conclusion
In conclusion, we demonstrated a novel approach to structural coloring of surface using the ablation-free direct laser patterning of metal-insulator-metal films. Using the proposed method, we prepared a test sample and performed a preliminary evaluation of the spectral properties of the fabricated structures. The structured surfaces demonstrate colors from reddish brown to pure green in reflection spectra depending on the shape of the single nanobamps.
Overall, the tailoring of the laser pulse parameters, as well as the thicknesses and materials of metal and dielectric films, provides a potential opportunity to shift the Fabry-Perot mode supported by the MIM sandwich in the full visible spectral range, and, consequently, to obtain colors from violet to red. The relative simplicity of the proposed approach, as well as the high lateral resolution of 25,000 dots per inch, pave the way for applications in optical sensors, security labeling, etc. | 2,230.2 | 2021-11-01T00:00:00.000 | [
"Physics"
] |
Designing an Ultrathin Film Spectrometer Based on III-Nitride Light-Absorbing Nanostructures
In this paper, a spectrometer design enabling an ultrathin form factor is proposed. Local strain engineering in group III-nitride semiconductor nanostructured light-absorbing elements enables the integration of a large number of photodetectors on the chip exhibiting different absorption cut-off wavelengths. The introduction of a simple cone-shaped back-reflector at the bottom side of the substrate enables a high light-harvesting efficiency design, which also improves the accuracy of spectral reconstruction. The cone-shaped back-reflector can be readily fabricated using mature patterned sapphire substrate processes. Our design was validated via numerical simulations with experimentally measured photodetector responsivities as the input. A light-harvesting efficiency as high as 60% was achieved with five InGaN/GaN multiple quantum wells for the visible wavelengths.
Introduction
Optical spectroscopy is a versatile technique in many branches of science and engineering. Miniaturized spectrometers and their array can enable a wide range of applications, including, but not limited to, health diagnostics, biochemical sensing, security, environmental science, planetary exploration, and lab-on-a-chip systems [1][2][3][4][5][6][7][8][9][10][11][12]. A standard spectrometer design utilizes diffractive optical elements (DOE) such as a diffraction grating to separate different spectral components into different optical paths for analysis. Most DOEs have a very narrow acceptance angle for the incident light, requiring additional optical elements to collimate or filter the input signal. Collimation optics typically consist of a narrow aperture, either a slit or pinhole, and a lens, adding to the system bulkiness and weight. A spectrometer design based on absorptive spectral parsing elements was recently proposed and demonstrated using both colloidal quantum dots and epitaxial nanostructures [4,[13][14][15][16][17]. Because light absorption in nanostructures only weakly depends on the incident angle [18,19], an absorption-based spectrometer can enable a much more flexible and compact optical design, which benefits the realization of a spectrometer array for hyperspectral imaging or sensors with multiple modalities. Moreover, an optics-free spectrometer design, which opens up new opportunities such as wearable spectrometers for sweat sensor patches, also becomes possible.
An absorption-based spectrometer utilizes a series of optical filters with different absorption responses. The absorption responses from any two optical filters need to have minimal spectral correlations. Once this condition is satisfied, the photocurrent I i generated from the i-th photodetector/optical filter combination can be determined by where P j is the optical power at a wavelength λ j absorbed by the photodetector, and R ij is the responsivity, with a unit of A/W, of the i-th photodetector/optical filter combination at a wavelength λ j . Once the responsivities of individual detection elements are measured, one can invert the above equation to obtain the optical spectrum P. In practice, the measured photocurrents include noises, and, thus, the above equation may not have a solution. However, one can seek a solution that minimizes the difference between the two sides of (1). This is a topic that has been the subject of immense interests in recent years [13,[20][21][22][23][24]. As the spectra of interests are often not arbitrary, the accuracy of spectral reconstruction can be improved with a set of constraints, e.g., by assuming the optical spectrum consists of a series of Gaussian spectra with a finite linewidth. The estimation approach also allows one to recover important spectral information even when the number of photodetectors is less than the number of wavelengths [25]. The photodetector/optical filter combination can be further simplified by a photodetector design with an intrinsic tunable absorption response. For example, indium gallium nitride (InGaN) dot-in-the-wire (DIW) light-absorbing nanostructures have been shown to exhibit a tunable absorption response by changing their geometric parameters, such as diameter and shape [26]. The intrinsic strain stored in the InGaN DIW region grown on GaN due to lattice mismatch is relaxed near the surface of the nanostructure, modifying the electronic band structure and thus the absorption response. Lithographically defined InGaN DIW nanostructures can tune their absorption cut-off wavelengths from ultraviolet through near-infrared spectra [26][27][28][29]. Previously, an absorption-based spectrometer using an InGaN DIW nanostructure directly integrated with a GaN pn junction was demonstrated [17]. Such a design enables an extremely compact construction, eliminating the need for a separate silicon photodiode array. However, InGaN DIW nanostructures have very limited light absorption in the visible wavelength range due to a small absorption length. Significantly increasing the InGaN thickness will not be feasible due to a large lattice mismatch with GaN. This work proposes and demonstrates a simple strategy, utilizing the well-established sapphire substrate patterning process to greatly enhance the absorption efficiency while maintaining a large acceptance angle for the incident light. We also demonstrate that spectroscopic performance can be significantly enhanced.
Device Design
Although silicon is a ubiquitous and often preferred material for a wide range of electronic and optoelectronic applications, silicon's light absorption properties cannot be easily tuned, which is necessary for an absorption-based spectrometer. To address this limitation, the proposed spectrometer design consists of an array of GaN-based photodiodes. Individual photodiodes' absorption cut-off wavelengths are tuned by changing the nanopillar structures' diameters. The underlying principle for the absorption tuning is local strain engineering [26]. When the compressive strain in the InGaN multiple quantum well (MQW) region is relaxed at the nanopillars' sidewalls, the absorption bandedge is blue shifted due to the reduction of the quantum-confined Stark effect (QCSE). Local strain engineering has been previously applied to monolithically integrate red, green, and blue-light-emitting diodes (LEDs) on the same substrate using a single epitaxial stack [27][28][29][30]. The same principle has also enabled the realization of an absorption-based spectrometer [17]. InGaN absorbs light efficiently, exhibiting a large absorption coefficient >10 4 /cm. However, the lattice mismatch with GaN limits the total InGaN thickness to a few tens of nanometers, resulting in a low absorption efficiency and photodiode responsivity.
To increase the light harvesting, one can employ light trapping structures, such as optical cavities, a roughened surface plus a back-reflector, plasmonic structures, and photonic crystals. In this work, we investigated a simple strategy using a metal-coated patterned sapphire substrate (pss) acting as a back-reflector and light scatterer to deflect the incident light in order to increase the optical path. The process of fabricating a pss is well established and widely used in the LED industry [31]. As a result, the proposed pss as a wafer-scale light trapping structure can be fairly economical to manufacture. In the current design, we consider the pattern to be fabricated on the bottom side of the sapphire substrate, as a laser lift-off (LLO) process to separate the GaN layer from the pss substrate is still in development [32]. In the future, it may be possible for the photodiode epitaxial stack to be grown directly on a pss, which can then be coated from the underside with a metal back-reflector after substrate removal. In addition to the back-reflector, we also included optional top TiO 2 grating, which we found to further enhance light absorption in large-diameter nanopillar structures. Figure 1 shows the schematic of the proposed device structure. For the pss, we considered an array of cone-shaped, silver (Ag)-coated structures with adjustable sidewall angle φ, which can be tweaked by the resist reflow condition and the subsequent transferring of the resist pattern into the sapphire via dry etching [31]. We fixed the base diameter B at 1.2 µm and assumed the backside of the sapphire substrate was polished before the cone structures were formed. The photodiode consists of an array of nanopillars. The epitaxial stack of the nanopillar is a typical LED structure with an InGaN/GaN MQW active region sandwiched between a GaN pn junction. The nanopillar's diameter D determines the active region's absorption cut-off wavelength. In this work, we considered five pairs of 2.5 nm InGaN/12 nm GaN MQWs with an emission wavelength of 590 nm at room temperature. The absorption cut-off wavelength decreased with the nanopillar diameter and became 485 nm with a nanopillar diameter of 50 nm. To complete the photodiode, the nanopillars were first coated with 50 nm of SiN x as the insulator and the void were filled and planarized by SiO 2 . The p-type contact consists of 5 nm each of Ni and Au plus a 200 nm-thick indium-tin-oxide (ITO) as the current spreading layer. The device fabrication and photodiode characterizations were previously reported in Ref. [28].
1 µm Figure 1. Schematic of the proposed wavelength-selective photodetector. The structure consists of three parts: the lightabsorbing InGaN/GaN MQW active region, the photodetector p-type (ITO, Ni/Au) and n-type (not shown) contacts, and the light-trapping structure consisting of the silver (Ag)-coated cone-shaped back-reflector and the top TiO 2 photonic crystal (PhC) layer. An unpolarized light incidents at an angle θ. The InGaN active region and the GaN pn junction are formed by lithographically patterning a thin-film structure of the same epitaxial stack, as shown by the scanning electron micrograph on the right, with an array of InGaN DIW nanopillars with a diameter of 200 nm before filling the space between the nanopillars with the insulating Si 3 N 4 .
To evaluate the light-harvesting efficiency (LHE), which we defined as the fraction of light intensity absorbed by the InGaN active region in the wavelength range 20 nm above the cut-off wavelength, we used finite-difference time-domain (FDTD) simulations with a periodic boundary condition. We considered an unpolarized incident light at an angle θ. We included the entire epitaxial stack, including the metal layers, and assumed a transparent Ni layer. After thermal annealing of the p-type contact in air at a high temperature, the Ni layer transformed into a transparent NiO. Although the absorption in InGaN depends on the wavelength, we assumed a constant absorption coefficient of 10 5 cm −1 for simplicity. This assumption is justified, because we are mainly interested in the relative improvement in absorption after introducing the light-trapping structure. We considered five different nanopillar diameters: 50 nm, 100 nm, 200 nm, 1000 nm, and thin film, corresponding to the absorption cut-off wavelengths of 485 nm, 518 nm, 536 nm, 570 nm, and 590 nm, respectively.
Results and Discussions
We first determined the optimal cone angle φ for the back-reflector. We focused on the smallest diameter nanopillar array, as it has the lowest LHE. The edge-to-edge spacing between two adjacent nanopillars was fixed at 50 nm for all nanopillar arrays. This spacing was chosen because it can be readily patterned using modern lithographic tools. Figure 2 compares the relative LHE as a function of the cone angle. For simplicity, we performed only 2D-FDTD calculations, as the goal was to determine the optimal cone angle rather than calculating the exact LHE. The cone's base dimension was fixed at 1.2 µm. It can be seen that the LHE peaks at a cone angle range between 32 • and 37 • . We fix φ =33 • in the following discussions. The relative LHE as a function of the cone-shaped Ag back-reflector's sidewall angle φ calculated using 2D FDTD simulations. The nanopillar diameter is 50 nm. The edge-to-edge spacing between two adjacent pillars is 50 nm. The TiO 2 grating is not included.
Next, we calculated the LHE for each photodetector and compared the result to the baseline, i.e., without the back-reflector, as well as the baseline plus a flat back-reflector. As a reference, we also calculated the LHE for a device structure that was previously demonstrated experimentally, which was nothing but the baseline with a much lower density of nanopillars. Figure 3 compares the LHE's for different light-trapping designs for a normally incident light. We can see that the simple cone reflector increases the LHE for all detectors by roughly two to three times compared to the baseline. As the LHE increases, the photodetector's responsivity and the spectral reconstruction's accuracy also increase, as is discussed below. The cone reflector on the backside of the sapphire substrate can be easily and economically fabricated. However, the performance gain over a flat reflector is considerable. As light passes through the nanopillar array, light scattering occurs. The cone-shaped reflector further increases the light angle passing through the InGaN/GaN MQW region the second time, resulting in a long absorption path. Further improvements in LHE can be achieved by creating a third pass through the MQWs. We chose not to use optical cavities or other structures that can introduce wavelength dependence or/and incident angle sensitivity. Instead, we proposed to add a 200 nm TiO 2 film on top of the ITO layer. The TiO 2 film has an 83% transmittance at a 500 nm wavelength. To suppress the incident angle dependence, we added a two-dimensional photonic crystal (PhC) structure on top with a period of 200 nm and a duty cycle of 50% in both directions. The height of the PhC layer is 100 nm. It can be observed in Figure 3 that the top TiO 2 layer further improves the LHE in all detectors. Compared to the previous design in Ref. [17], the proposed pss and the TiO 2 grating in this work significantly enhance the LHE. As is discussed later, the improvement of nearly 20× in LHE for the 1000 nm diameter photodetector considerably improves the spectral reconstruction performance. It is worth noting that the TiO 2 structure introduced here is uniform across all devices and was intentionally not optimized for the optical cavity effect. A high transmittance through TiO 2 also ensures that the majority of the incident light can enter the structure. For each photodetector, the LHEs for different light-trapping designs are also shown: "prior result" corresponds to the experimental device that was previously reported without any light-trapping structure and with a low nanopillar fill factor; "high fill factor" corresponds to a nanopillar array with an edge-to-edge spacing between two adjacent nanopillars; "flat reflector" corresponds to the coating of Ag on the polished bottom side of the sapphire substrate; "cone reflector" corresponds to the addition of a cone-shaped, Ag-coated back-reflector on the bottom side of the sapphire substrate; "TiO 2 grating" corresponds to the complete light-trapping structure shown in Figure 1.
The absorption responses of different photodetectors in an absorption-based spectrometer need to be uncorrelated. In our design, the correlation is removed by varying the absorption cut-off wavelength. Figure 4 shows the broadband characteristics of the absorption of various photodetectors with different nanopillar diameters. It is observed that the absorption responses are relatively flat with respect to the wavelength for all detectors. Hence, the degree of correlation being removed is mainly determined by the absorption of each photodetector in a narrow spectral band just above the cut-off wavelength, which is why we chose to benchmark our design by defining the LHE as the fraction of light absorbed in the 20 nm wavelength window above the cut-off wavelength. Figure 4 also suggests that the spectral reconstruction performance depends strongly on individual photodetectors' LHEs. Increasing the LHEs can improve the spectral reconstruction, as is shown below. Figure 1. The absorption cut-off wavelengths are 485 nm, 518 nm, 536 nm, 570 nm, and 590 nm in the order of the nanopillar diameter starting from 50 nm. The incident light is unpolarized and has a 0 • incident angle, i.e., normal to the detector surface. Figure 5 shows the LHEs at different incident angles. We mainly focus on an incident angle from 0 • to 50 • , which corresponds a numerical aperture of 0.77. This should include most optics if they are to be used with the proposed photodetector array. One advantage of the absorption-based spectrometer compared to a DOE-based spectrometer is the incident angle insensitivity, which enables compact construction of the entire system, including the use of a high numerical aperture focusing lens. The introduction of light-trapping structures will inevitably introduce an unwanted angle dependence. As shown in Figure 5, the incident angle dependence is relatively weak for small-nanopillar detectors but not negligible for 1000 nm and thin-film devices. Even so, the angle dependence is not very strong and is expected to be easily mitigated using a high-efficiency optical diffuser.
Finally, we examined how the improved LHE impacts the spectrometer's performance. We considered 14 photodetectors with absorption cut-off wavelengths varying from 485 nm to 590 nm. Starting from the experimentally measured responsivity curves for these detectors, we estimated the new responsivity matrix R based on the calculated LHEs. To compare the spectral reconstruction's performance, we inputted a monochromatic light δ(λ), varied its wavelength λ from 460 nm to 590 nm, determined the theoretical photocurrents I p = R × δ(λ), and calculated the peak positions of the reconstructed spectra P using the non-negative least square (NNLS) algorithm with an L 1 norm || · || 1 on the following inequality [13]: The result is shown in Figure 6. Without the LHE enhancement, the spectral reconstruction has a large error between 540 nm and 570 nm. The cone reflector increases its LHE by more than 10 times, resulting in much more accurate spectral reconstruction. Fur-ther improvements are expected by increasing the number of photodetectors in the long wavelength range and the level of responsivity with improved electrical characteristics. [17]). The LHEs of these devices are shown as "prior result" in Figure 3. (b) The spectral reconstruction performance determined using the responsivity matrix R ij , where i and j correspond to the i−th photodetector and wavelength λ j , respectively; the theoretical photocurrent I i of the i−th photodetector determined by I i = R × δ(λ), where δ is the Dirac delta function; and Equation (2). The "current design" corresponds to the responsivity matrix shown in (a). The "high LHE design" corresponds to the design shown in Figure 1.
Conclusions
In summary, we have demonstrated an absorption-based on-chip optical spectrometer design based on a standard LED epitaxial stack. Using local strain engineering, the detector's absorption cut-off wavelength can be tuned geometrically by various lithographically defined arrays of nanopillars with different diameters. In spite of InGaN's high absorption coefficient, the lattice mismatch between InGaN and GaN limits the LHE of an InGaN/GaN MQWs to a few percentage points. To this end, we introduced a simple cone-shaped, Ag-coated back-reflector at the bottom side of the sapphire substrate to enhance the absorption. The interplay between the light scattering from the nanopillar array and the deflection from the cone shape significantly increase the absorption length through the active region. The LHE can be further enhanced with the addition of a thin TiO 2 film dotted with a two-dimensional TiO 2 periodic structure. Using a photodetector design previously demonstrated by experiments, LHEs of 20-60% can be obtained in the wavelength range of 450 nm-590 nm, which represents a 10-fold improvement on the prior results.
Not only can the Ag-coated cone-shape back-reflector be readily fabricated using mature patterned sapphire substrate processes, but the enhanced LHEs can lead to improved accuracy of spectral reconstruction in the situation when the number of photodetectors is smaller than the degree of freedom or the number of independent wavelengths. Moreover, the LHE enhancement to the cone-shaped back-reflector and the top TiO 2 layer exhibit weak dependence on the incident angle of light, reducing the requirement to include collimation optics as is often needed for most spectrometers. Therefore, our spectrometer design can enable an ultrathin form factor for an array of spectrometers on the chip, which can be attractive for many mobile applications.
Conflicts of Interest:
The authors declare no conflicts of interest. | 4,480 | 2021-06-28T00:00:00.000 | [
"Physics",
"Engineering",
"Materials Science"
] |
Growth-inhibitory effects of the chemopreventive agent indole-3-carbinol are increased in combination with the polyamine putrescine in the SW480 colon tumour cell line
Background Many tumours undergo disregulation of polyamine homeostasis and upregulation of ornithine decarboxylase (ODC) activity, which can promote carcinogenesis. In animal models of colon carcinogenesis, inhibition of ODC activity by difluoromethylornithine (DFMO) has been shown to reduce the number and size of colon adenomas and carcinomas. Indole-3-carbinol (I3C) has shown promising chemopreventive activity against a range of human tumour cell types, but little is known about the effect of this agent on colon cell lines. Here, we investigated whether inhibition of ODC by I3C could contribute to a chemopreventive effect in colon cell lines. Methods Cell cycle progression and induction of apoptosis were assessed by flow cytometry. Ornithine decarboxylase activity was determined by liberation of CO2 from 14C-labelled substrate, and polyamine levels were measured by HPLC. Results I3C inhibited proliferation of the human colon tumour cell lines HT29 and SW480, and of the normal tissue-derived HCEC line, and at higher concentrations induced apoptosis in SW480 cells. The agent also caused a decrease in ODC activity in a dose-dependent manner. While administration of exogenous putrescine reversed the growth-inhibitory effect of DFMO, it did not reverse the growth-inhibition following an I3C treatment, and in the case of the SW480 cell line, the effect was actually enhanced. In this cell line, combination treatment caused a slight increase in the proportion of cells in the G2/M phase of the cell cycle, and increased the proportion of cells undergoing necrosis, but did not predispose cells to apoptosis. Indole-3-carbinol also caused an increase in intracellular spermine levels, which was not modulated by putrescine co-administration. Conclusion While indole-3-carbinol decreased ornithine decarboxylase activity in the colon cell lines, it appears unlikely that this constitutes a major mechanism by which the agent exerts its antiproliferative effect, although accumulation of spermine may cause cytotoxicity and contribute to cell death. The precise mechanism by which putrescine enhances the growth inhibitory effect of the agent remains to be elucidated, but does result in cells undergoing necrosis, possibly following accumulation in the G2/M phase of the cell cycle.
Background
There is strong epidemiological evidence to support a protective role of fruit and vegetables against the development of cancer in a range of major organs, in particular of the breast and the digestive tract. This evidence has led to the isolation and characterisation of discreet dietary constituents that may be responsible for their chemopreventive activity. I3C is derived from cruciferous vegetables such as broccoli and Brussels sprouts, and possesses anticarcinogenic activity in a number of models, both in vivo and in vitro. It has received considerable attention as a potential anti-tumour agent for breast cancer, particularly due to its ability to alter the estrogen metabolite ratio of 2-hydroxyestrone to 16-α-hydroxyestrone [1][2][3], and has recently been the subject of a breast cancer prevention dose-finding pilot study using the urinary estrogen metabolite ratio as the surrogate endpoint biomarker [4]. Promising results have been obtained in phase I clinical trials of I3C against recurrent respiratory papillomatosis or cervical intraepithelial neoplasia [5,6].
I3C has also been shown to protect against carcinogen-induced tumours in a range of rodent models including liver, tongue, skin, mammary tissue and colon [7][8][9][10][11][12]. Some studies however, have suggested that when administered after chemically-induced initiation, I3C may have promoting activity in liver, raising concerns as to its suitability for use as a chemopreventive agent in humans [12][13][14][15]. The latter two of these studies showed that in medium and long-term treatment protocols (10 and 30 weeks respectively), I3C increased glutathione S-transferase-P (GST-P) positive foci in livers of rats initiated with diethylnitrosamine [15], or in a multi-organ rat model initiated with 7,12-Dimethylbenz [a]anthracene plus azoxymethane plus AFB 1 [12], and these results were interpreted as indicating a promoting effect of I3C on hepatocarcinogenesis. In a long-term (48 week) feeding study we showed that I3C prevented aflatoxin B 1 (AFB 1 )-induced liver carcinogenesis in Fisher F344 rats, irrespective of whether it was administered before or after the carcinogen [7]. Interestingly, at an earlier time point in our feeding study (13 weeks) we observed strong focal liver staining for γ-glutamyl transpeptidase and GST-P in rats treated with I3C post-initiation. Using these criteria alone as predictors of tumourigenesis, we would have expected no effect, or indeed an increase in tumours in the livers of these animals. However this was not the case, and at 48 weeks the animals treated with I3C following AFB 1 were completely protected [7]. While the mechanism by which I3C exerts tumour blocking activity has been attributed to the induction of drug metabolising enzymes, which can lead to increased conjugation and excretion of the carcinogen and decreased DNA adduct formation [16][17][18][19][20][21][22][23], the mechanisms by which it suppresses tumour promotion and progression are not well defined.
Inhibition of ornithine decarboxylase (ODC) activity has been proposed as a target mechanism for tumour suppression, reviewed by Pegg [24]. ODC is the rate-limiting enzyme in the biosynthesis of polyamines, which are required for normal cell proliferation. Polyamine levels and ODC activity are elevated in a wide range of tumours, and high levels correlate with poor prognosis in breast cancer patients [25][26][27]. ODC has further been implicated in the carcinogenic process by findings that overexpression promotes tumourigenesis in vivo, and can be sufficient for transformation of cell lines in vitro [28][29][30][31]. Therefore inhibition of ODC activity has been commonly used as a predictor of chemopreventive activity, and as an intermediate endpoint biomarker of clinical efficacy in intervention studies. The regulation of ODC activity within the cell is complex, and occurs at a multitude of levels, [32][33][34], including protein degradation, post-translational modification, mRNA translation and gene transcription. ODC protein levels are regulated by antizyme, which binds to, and targets ODC for degradation by the 26S proteasome, reviewed in [35]. ODC is also heavily regulated at the level of mRNA translation due to secondary structure in the 5' untranslated region, which can be modified by activity of the translation initiation factor eIF-4E [32,36]. The ODC promoter has been shown in various species to be regulated by a number of transcription factors, including the Wilms' tumour suppressor WT1 [37][38][39][40] and references therein. ODC activity can also be modulated in response to many signalling pathways including the epidermal growth factor receptor, phospho-inositide 3-kinase (PI3K), and estrogen receptor pathways [41][42][43][44].
In the long term feeding study mentioned above [7], the level of ODC activity in livers of rats which received dietary I3C was markedly decreased compared to that in rats on a control diet. The current study was designed to explore whether alteration of ODC activity and polyamine levels might play a mechanistic role in the chemopreventive efficacy of I3C in colon cells. To that end the effects of I3C on cell growth, ODC activity and intracellular polyamine levels were studied in human-derived colon cancer cells.
Difluoromethylornithine (DFMO) is a well-studied specific inhibitor of ODC enzyme activity. Initial trials of DFMO as a cancer therapeutic agent were disappointing, showing dose-limiting toxicity, together with little therapeutic activity. The contents of the intestinal and colonic lumen form a rich supply of polyamines, augmented by dietary sources and intestinal bacteria [45][46][47][48][49]. Exogenous polyamines are rapidly taken up by polyamine-depleted cells, and can be utilised for tumour growth [50], offering one explanation for the apparent lack of activity of DFMO in humans. However, more recently, DFMO has attracted much interest as a potential chemopreventive agent at lower doses [51][52][53][54][55][56][57][58].
In order to determine any similarity in mechanism of chemopreventive activity between I3C and DFMO the modulation of the growth inhibitory effects of the two agents by polyamines was investigated.
Cell lines and treatments
Immortalized human colon epithelial cells (HCEC) and human-derived colon carcinoma cell lines SW480 and HT29 were kindly provided by A. Pfeifer (Nestec Ltd., Lausanne, Switzerland) and C. Paraskeva (Bristol University, UK), respectively, and were cultured as described previously [59,60]. All cell lines tested negative for mycoplasma infection and were cultured without antibiotics. I3C (Sigma-Aldrich Company Ltd.; Poole, UK) was prepared as a stock solution in DMSO, and cells were treated in such a way that all control and treated cells received equal volumes of DMSO, which did not exceed a final concentration of 0.05%. For each experiment, cells were seeded in normal growth medium and were allowed to adhere for at least 4 hours before treatment. Cells were treated with concentrations of I3C from 100 µM up to 1 mM in some experiments. It should be noted that any effects seen at concentrations above 500 µM are unlikely to be physiologically relevant. All cell culture reagents (GIB-CO) were purchased from Invitrogen Ltd. (Paisley, UK).
Cell growth assays
Cells were seeded at 1 × 10 4 onto 24 well plates in normal growth medium, and allowed to adhere, prior to treatment with I3C for times up to 10 days. The culture medium was not changed during the incubation time. Cells were harvested by trypsinisation at 24 hourly intervals and counted on a Coulter ZM electronic cell counter (Beckman Coulter UK Ltd., High Wycombe, UK).
Mean IC 50 values were obtained from plots of cell number expressed as a percentage of control, versus I3C concentration following treatment for 7 days. In further growth experiments, HT29 and SW480 cells were treated with concentrations of I3C (250 or 175 µM respectively), or DFMO (125 or 50 µM; CN Biosciences (UK) Ltd., Beeston, UK), and cultured for 7 days in the presence or absence of putrescine as indicated. To determine their ability to recover proliferative capacity following treatment, cells (1 × 10 4 on 12 well plates) were cultured in the presence of I3C, plus or minus putrescine, for 24 hours, after which they were either maintained in treated medium or washed and replenished with fresh medium and allowed to recover before harvesting on day 7. The proliferation rate of cells was calculated as fold increase in cell number following the initial treatment period.
For analysis of cell cycle, 5 × 10 5 cells were seeded onto 9 cm plates and treated with I3C in the presence or absence of putrescine for 96 hours. Cells were harvested by trypsinisation and fixed overnight in 70% ethanol at 4°C, then collected by centrifugation and resuspended in PBS containing 0.1 mg/ml RNase and 5 µg/ml propidium iodide and incubated overnight at 4°C. DNA content was analysed using a Becton Dickinson FACScan and Cell Quest software, plotting 5000 events per sample. Subsequent data analysis was performed using ModFit LT software (Becton Dickinson UK Ltd.; Cowley, UK).
Measurement of phosphatidylserine externalisation
Cells were seeded at 2-5 × 10 5 onto 9 cm plates and treated with I3C for 96 hours. After treatment, cells obtained by trypsinisation were combined with those that had spontaneously detached during the incubation. Phosphatidylserine externalisation was determined by annexin V staining. Cells were pelleted and resuspended in 1 ml annexin buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 5 mM KCl, 1 mM MgCl 2 , 1.8 mM CaCl 2 ). FITC-conjugated annexin V was added to a final concentration of 100 ng/ ml and cells were incubated for 8 min at room temperature, after which propidium iodide (1.5 µg) was added and cells were analysed by flow cytometry using a FACScan flow cytometer (Becton Dickinson, San Jose, CA) and Cell Quest software.
Measurement of ODC activity
Colon cells were seeded at 5 × 10 5 onto 9 cm plates and allowed to adhere prior to treatment with I3C for 24 hours. Whole cell lysates were prepared by successive rounds of freeze thawing of cells suspended in 200 µl sodium phosphate buffer (100 mM, pH 7.2). ODC activity in cell lysates (90 µl) was determined by measurement of 14 CO 2 released from labelled ornithine, under reaction conditions described previously [7]. Protein content of cell lysates was determined using the BioRad protein assay kit (Bio-Rad Laboratories Ltd., Hemel Hempstead, UK). ODC activity was calculated as pmol CO 2 produced/ mg protein/ hour, and results are expressed as a percentage of activity in control samples (DMSO-treated).
Measurement of intracellular polyamine levels
Cells seeded onto 9 cm plates at densities between 5 × 10 5 and 2 × 10 6 depending on treatment time, were treated with I3C in the presence or absence of putrescine for 24, 96 or 168 hours, and were then harvested and washed twice with PBS. Cell pellets were resuspended in 200 µl 10% trichloroacetic acid, and the acid-soluble polyamines extracted by centrifugation. Dansylated polyamines were detected by fluorescence, following separation by ionpairing reversed phase HPLC. Dansylation and extraction of polyamines was based on published methods [61,62]. In brief, extracts were mixed with 400 µl dansyl chloride (5 mg/ml in acetone) in the presence of approximately 300 mg solid sodium carbonate, together with 500 nmol of the internal standard 1,6-hexanediamine, and incubated for 20 min at 70°C. Reactions were allowed to cool to room temperature and excess dansyl chloride was sequestered by the addition of 200 µl proline (250 mg/ml). Polyamines were then extracted into 4 ml cyclohexane, and following evaporation of the cyclohexane were reconstituted in 200 µl acetonitrile. Dansylated polyamines were separated and detected using a Gilson 715 series HPLC system with a BDS C18 column (250 × 4.6 mm, particle size 3 µm, Hypersil, Runcorn, UK) coupled to a Waters 470 scanning fluorescence detector (excitation 336 nm, emission 520 nm). The mobile phase consisted of a gradient between buffer A (0.02 M 1-heptanesulphonic acid (pH 3.4), acetonitrile, methanol (5:3:2 v/v)) and buffer B (acetonitrile, methanol (3:2 v/v)) essentially as described by Aboul-Enein and Al-Duraibi [63].
Measurement of putrescine uptake
SW480 cells were seeded at 5 × 10 4 onto 24 well plates, and treated with I3C (0-500 µM) or DFMO (50 µM) in 1 ml medium for 24 hours. Cells were then treated with 10 µl 10 mM putrescine containing 0.25 µCi 14 C-labelled putrescine, and incubated for a further 30 min prior to termination of the experiment. Cells were then washed thoroughly with 1 mM putrescine, and lysed in 250 µl 1 M NaOH 60°C for 1 hour. The lysate was neutralised with 250 µl 1 M HCl, and 14 C-labelled putrescine that had been incorporated into the cells was determined by scintillation counting. Protein content of the lysates was determined using the BioRad protein assay reagent.
Measurement of intracellular glutathione
Cells were seeded at 5 × 10 4 to 1 × 10 6 onto 6 well plates (according to treatment time) and treated with I3C in the presence or absence of putrescine for 24, 96 or 168 hours. Cells were harvested by trypsinisation and intracellular glutathione levels were determined according to the method of Baker et al. [64].
Statistical analysis
Data were analysed by ANOVA using either Oneway analysis followed by Tukey's posthoc test, or the General Linear Model twoway analysis followed by Fisher's Least Significant Difference test, as appropriate [65,66].
Effect of I3C on proliferation and ODC activity of colon cell lines
I3C inhibited proliferation of the two tumour-derived colon cell lines SW480 and HT29, and of the normal-derived HCEC cell line (Fig 1). The IC 50 s (mean ± SD, n = 3; 168 hr) for the SW480, HT29 and HCEC cell lines were calculated as 123.22 ± 8.87; 127.20 ± 8.13 and 164.5 ± 7.0 µM respectively. These values are consistent with concentrations reported to exert biological activity in a range of other cell lines [67][68][69][70]. The basal level of ODC activity was low in the normal-derived HCEC line, 156.1 ± 9.3 pmol CO 2 produced/ hour/ mg protein, while levels were found to vary markedly between the two tumour cell lines. The HT29 line showed high activity, 1079.0 ± 42.4, while the SW480 line exhibited activity of 171.3 ± 20.9 pmol CO 2 produced/ hour/ mg protein. The SW480 cell line is unusual in this respect, as tumour derived cell lines commonly exhibit high levels of ODC activity. I3C decreased basal ODC activity in a dose-dependent manner in all three colon cell lines after treatment for 24 hours (Fig 2), with the HCEC line showing greatest sensitivity, and the SW480 line showing least inhibition. I3C did not inhibit ODC activity when the agent was added directly into an assay (data not shown), indicating that the agent does not act as a direct enzyme inhibitor. Interestingly, the HCEC cell line showed the least decrease in cell number in response to I3C at this time point. This lack of apparent cytotoxicity is in agreement with a recent study by Bonnesen et al. [71], which reported an IC 50 in excess of 500 µM I3C in this cell line using MTT tests carried out after 24 hours.
Effect of exogenous putrescine on growth modulation by DFMO and I3C
To determine the extent to which inhibition of ODC activity contributed to the antiproliferative activity of I3C in the tumour lines, we made a comparison of its growth inhibitory effect with that of the specific ODC inhibitor, and known antitumour agent, DFMO. The growth inhibitory action of DFMO can be reversed by exogenously added putrescine, which restores intracellular levels of polyamines required for cell proliferation. In all three cell lines, supplementation of culture medium with putrescine reversed the effects of DFMO as expected, and in agreement with data obtained for the HT29 cell line by Gamet et al. [72]. We hypothesised therefore that if the growth-inhibitory effect of I3C was directly related to its ability to decrease ODC activity, then as for DFMO, this may be reversed by supplementation with putrescine. Interestingly, putrescine did not reverse the inhibitory effect caused by I3C in any of the colon cell lines, and at concentrations that did not affect control cell growth (5-100 µM), it actually increased growth inhibition of I3C-treated SW480 cells in a dose-dependent manner ( Fig. 3; p < 0.05). This cell line was also found to be particularly sensitive to exogenously added putrescine, with concentrations in excess of 100 µM inhibiting proliferation of control cells. In contrast, the HT29 and HCEC cell lines were resistant to concentrations of exogenous putrescine up to 500 µM, whether cultured with or without I3C. The cell lines also differed in their ability to recover from treatment with I3C (Fig 4). The HT29 cell line recovered completely from 24 hour treatment with I3C up to 500 µM, and also recovered from combined I3C (250 µM)/ putrescine (500 µM) treatment. In contrast, SW480 cells only partially recovered from 24 hour treatment with I3C at 175 µM in the presence or absence of putrescine (50 µM), and were unable to recover from higher concentrations of I3C. The HCEC cell line behaved more like the HT29 line, but did not recover completely. The reason for this apparent difference in the ability of the two tumour cell lines to recover from treatment is not clear at the present time. However, these data may suggest that I3C could be acting via different mechanisms, one reversible and one irreversible; alternatively, it is also possible that the effect is time dependent and that treatment with I3C for a longer period would have resulted in irreversible growth inhibition in the HT29 cell line also. We have previously observed a difference in the ability of two breast cell lines, HBL100 and MDA MB468 to recover from I3C treatment [67], which we concluded was related to the induction of apoptosis in the latter, but not in the former cell line.
Effect of I3C and exogenous putrescine on induction of apoptosis or cell cycle arrest
In order to investigate whether I3C induces apoptosis in colon cells, and to address the possibility that the combination treatment of I3C plus putrescine may predispose SW480 cells to apoptosis, we compared levels of live, apoptotic and necrotic cells, determined by measurement of phosphatidylserine externalisation after treatment with I3C alone, I3C plus putrescine, or DFMO.
Treatment with I3C caused significant induction of apoptosis in the SW480 cell line at 250 and 500 µM after 96 hour treatment, but only slight induction was apparent with 175 µM (Fig 5). Induction of apoptosis was accompanied by an increase in necrotic cells, possibly due to secondary necrosis of previously apoptotic cells. It seems unlikely that the low level of apoptosis seen in response to 175 µM I3C after 96 hours could account for the apparently irreversible nature of growth inhibition observed for this concentration in the SW480 cell line.
The combination treatment did not further induce apoptosis, but did cause a significant increase in the proportion of cells undergoing necrosis (9.5% in the I3C + putrescine treatment compared with 4% for I3C 175 µM alone; P < 0.05). The effect of combination of putrescine with higher concentrations of I3C (250 or 500 µM) was not assessed due to the high level of toxicity encountered with the original combination treatment (I3C 175 µM + putrescine).
Putrescine alone had no effect on control cells. DFMO did not induce apoptosis or necrosis following treatment for 96 hours (data not shown).
We also investigated whether cell cycle arrest may contribute to the inhibition of cell growth at these concentrations. DFMO caused an increase in the proportion of cells in G 0 /G 1 phase of the cell cycle in both tumour cell lines, as has been previously reported in other cell lines [73][74][75]. No major phase specific block in cell cycle progression was observed following treatment with I3C alone or in combination with putrescine for 96 hours in either cell line. However, I3C caused a small but significant decrease in G 0 /G 1 , which was accompanied by an increase in the proportion of cells in G 2 /M in the SW480 cell line. Accumulation of cells in G2/M achieved significance in the I3C + putrescine treatment compared with control SW480 cells, but was not significantly elevated above levels of I3C treated cells (Fig. 6A). In order to further investigate this effect, cells were first synchronised in G2/M by treatment with nocodazole, and then released into either control or I3C ± putrescine-treated medium. Neither I3C alone, nor in combination with putrescine caused a delay in exit of cells from G2/M, however, cells in both treatment groups appeared to accumulate in G2/M by 32 hours after release ( Fig 6B). The agents did not prolong progression of cells through any phase of the cycle within 24 hours of release from nocodazole block.
Modulation by I3C of intracellular polyamines and effect of exogenous putrescine in SW480 cells
To investigate whether modulation of intracellular polyamine levels could be contributing to the sensitisation of the SW480 cells to I3C by putrescine, intracellular putrescine, spermidine and spermine levels were measured after treatment with I3C in the presence or absence of putrescine. In control cells at the 24 hour time point, spermine was the most abundant of the polyamines analysed, occurring at levels approximately 1.5 and 4 fold higher than spermidine and putrescine respectively. Exposure to I3C for 24 hr resulted in a significant decrease in intracellular putrescine levels, which was completely reversed by co-exposure of the cells to exogenous putrescine (Fig 7; Table 1). The I3C-induced decrease was transient, as by 96 hours of treatment, intracellular putrescine was found to be slightly raised above control levels. Levels of spermidine were not consistently altered at any time point during exposure to I3C (Table 1). In contrast, at each time point, levels of intracellular spermine were increased in cells treated with I3C concentrations that caused a decrease in cell number (Fig. 8), indicating a possible inverse relationship between spermine accumulation and cell growth. While co-administration of exogenous putrescine restored intracellular putrescine levels to normal, it did not alter intracellular levels of spermine in control cells, or prevent the accumulation of spermine in I3C-treated cells (data not shown). The total intracellular level of the three polyamines was also increased in response to I3C, in a pattern corresponding with that of the major component, spermine.
In a preliminary experiment (assays performed in duplicate) intracellular levels of GSH were determined in extracts from cells treated for either 24, 96 or 168 hours. No evidence for a decrease in GSH level was obtained with any treatment at any of the time points (data not shown).
Effect of I3C on uptake of putrescine by SW480 cells
In order to further investigate the effect of I3C on regulation of the intracellular putrescine pool, we determined putrescine uptake following treatment with I3C alone or in medium supplemented with putrescine. SW480 cells which had been treated with I3C under normal growth conditions for 24 hours exhibited a 50% decrease in the rate of putrescine uptake (Fig 9), which appeared to correlate with the decrease in intracellular putrescine levels shown in Fig 7. DFMO increased the rate of putrescine uptake 2.5-fold over control levels (Fig 9).
Discussion
Upregulation of ODC has been implicated as a necessary and early step in carcinogenesis. The ability of some chemopreventive agents, such as DFMO, to inhibit ODC activity is therefore thought to be an important mechanistic . * indicates a significant difference from control levels (P < 0.05; n = 8; pooled SD = 3.54). B) Cells were synchronised with nocodazole for 24 hr (Noc (24 hr)) and then released into control (Con +32 hr), I3C (I3C +32 hr) or I+P (I+P +32 hr) treated medium and analysed as described in Materials and Methods. Data from the +32 hr time point is shown. Control = DMSO treated, non-synchronised cells. * indicates a significant difference from the Con +32 hr value for each phase of the cell cycle (P < 0.05; n = 4; pooled SD = 2.61). contributor to their anticarcinogenicity. The results described above show for the first time that in colon cancer cells I3C decreases ODC activity and interferes with intracellular polyamine levels. However, there are intriguing differences between I3C and the classical ODC inhibitor, DFMO. Consistent with the known mechanism of action of DFMO, exogenously added putrescine antagonised growth inhibition caused by this agent. In contrast, while I3C also reduced ODC activity and decreased intracellular putrescine levels, its growth-inhibitory effect was not reversed, and in the case of the SW480 cell line, was actually increased in the presence of exogenous putrescine. This led us to conclude that in the colon tumour cells studied here, the growth inhibitory effect of I3C appeared to be independent of its ability to decrease intracellular putrescine levels. This conclusion is further supported by the observation that exogenously added putrescine restored intracellular putrescine levels of I3C-treated cells to normal. It is worthy of note, that while exogenously added putrescine can reverse growth inhibition due to impaired polyamine synthesis, it cannot reverse growth inhibition caused by altered polyamine catabolism or efflux.
It is possible that the decrease in putrescine observed at the initial time point, was not wholly due to inhibition of ODC activity or decreased uptake, but also due to the continued conversion of the diamine to spermidine, levels of which did not fall in response to I3C treatment. Putrescine levels in SW480 cells were also found to have reverted to control levels after 96 hours of treatment with I3C alone indicating that the cells overcome these initial effects of I3C observed at 24 hours. Several possibilities may account for this, including reversal of the inhibition of ODC activity and/or putrescine uptake observed after 24 hour treatment. I3C treatment of SW480 cells resulted in an accumulation of intracellular spermine over time (up to approximately 5 fold above control levels), which There are many reports in the literature detailing the cytotoxic effects of spermine [76][77][78][79][80][81], although it should be noted that high concentrations of exogenous spermine were used in these studies. It is possible that the accumulation of spermine seen in response to treatment with I3C contributes to its growth-inhibitory action in this cell line. Addition of exogenous putrescine, did not prevent (nor significantly increase), the accumulation of intracellular spermine levels observed in response to treatment with I3C, which may explain, at least in part, the inability of exogenous putrescine to prevent the I3C-induced growth in-hibition. Several mechanisms have been proposed for the cytotoxicity of spermine, including competition for spermidine and Mg2+ binding sites, thus preventing many of the physiological functions of spermidine and Mg2+, such as protein synthesis, as well as loss of intracellular glutathione through conjugation or spermine-induced oxidative stress, or a direct toxic effect of the spermine itself [78,80,81]. In preliminary experiments we found no evidence of a decrease in intracellular GSH levels following any treatment.
Seiler et al. [76] have reported a non-apoptotic mechanism of cell death in CaCo-2 cells that had accumulated high levels of spermine (more than double normal lev-
Figure 9
Effect of I3C on putrescine uptake in SW480 cells. Cells were treated with DMSO alone (Con), I3C or DFMO as indicated, and putrescine uptake determined as described in Materials and Methods. Results are presented as percentage of control values (n = minimum of 4) * indicates significant difference from control levels (P < 0.05). In that study, exposure to 5 mM spermine caused a decrease in the population of cells in G 1 , and an accumulation in the G 2 phase of the cell cycle [76]. In our study, at higher concentrations (≥ 250 µM), I3C caused induction of apoptosis accompanied by necrosis, but only a small increase in either was observed at the 96 hour time point with 175 µM I3C, and although we did not obtain a clear phase specific arrest with any treatment in our study, the proportion of cells in G 1 decreased and the number in G 2 /M increased in response to I3C from 32 hours. In the combination treatment (I3C + putrescine), the number of cells in G 2 /M was slightly increased and the proportion of cells undergoing necrosis was significantly increased compared with I3C treatment alone. Although each of these changes appears to be small when considered in isolation, in combination they may be sufficient to account for the inhibition caused by I3C in the cell growth experiments. At 96 hr, control cells showed combined apoptosis + necrosis of less than 4%, with approximately 8% of cells in G 2 /M; I3C (175 µM)-treated cells showed combined apoptotic and necrotic cells of 9%, with 11% in G 2 /M, while of I3C + putrescine-treated cells, approximately 16% were undergoing apoptosis or necrosis, with 13% in G 2 /M.
I3C (µM)
The mechanism by which I3C inhibited ODC activity was not further investigated in this study, but clearly occurs at a level distinct from the direct enzyme inhibition caused by DFMO. ODC activity can be regulated via many signalling pathways, including the epidermal growth factor receptor, PI3K, and estrogen receptor pathways [41][42][43][44], which are potential targets for the action of chemopreventive agents such as I3C. We have shown, for example, that I3C can inhibit the PI3K/ protein kinase B pathway in the MDA MB468 breast tumour cell line, but not in the normal derived HBL100 line [67], but this did not correlate with the ability of this agent to inhibit ODC activity in those cell lines (Howells L.M. et al. unpublished data). There are clearly multiple possible mechanisms via which I3C could exert its effect on ODC activity in these colon cell lines, including inhibition of signalling pathways such as those mentioned above.
Conclusions
Our conclusion from this work is that inhibition of ODC activity and consequent decrease in intracellular putrescine levels does not, per se, constitute a primary mechanism of action of I3C in these colon cell lines. However, perturbation of polyamine homeostasis leading to increases in intracellular spermine levels may contribute to a cytotoxic effect of the agent. The exact mechanism by which putrescine sensitised the SW480 cell line to I3C has not yet been elucidated, and the relevance of this observation to future chemopreventive strategies involving I3C remains to be determined. | 7,417.2 | 2003-01-14T00:00:00.000 | [
"Biology"
] |
Models and hierarchical methodologies for evaluating solar energy availability under different sky conditions toward enhancing concentrating solar collectors use: Texas as a case study
The precise estimation of solar radiation data is substantial in the long-term evaluation for the techno-economic performance of solar energy conversion systems (e.g., concentrated solar thermal collectors and photovoltaic plants) for each site around the world, particularly, direct normal irradiance which is utilized commonly in designing solar concentrated collectors. However, the lack of direct normal irradiance data comparing to global and diffuse horizontal irradiance data and the high cost of measurement equipment represent significant challenges for exploiting and managing solar energy. Consequently, this study was performed to develop two hierarchical methodologies by using various models, empirical correlations and regression equations to estimate hourly solar irradiance data for various worldwide locations (using new correlation coefficients) and different sky conditions (using cloud cover range). Additionally, the preliminary assessment for the potential of solar energy in the selected region was carried out by developing a comprehensive analysis for the solar irradiance data and the clearness index to make a proper decision for the capability of utilizing solar energy technologies. A case study for the San Antonio region in Texas was selected to demonstrate the accuracy of the proposed methodologies for estimating hourly direct normal irradiance and monthly average hourly direct normal irradiance data at this region. The estimated data show a good accuracy comparing with measured solar data by using locally adjusted coefficients and different statistical indicators. Furthermore, the obtained results show that the selected region is unequivocally amenable to harnessing solar energy as the prime source of energy by utilizing concentrating and non-concentrating solar energy systems.
List of symbols
Temperature at zero altitude (K) U 1 Pressure-corrected relative optical-path length of precipitable water (cm) U 3 Ozone's relative optical-path length (
Introduction
Renewable energy sources have taken increasingly significant attention these days. Particularly, solar energy that could contribute efficiently to attain the proper solution for the rapid growth problem in energy demand. The short-term solution can be through offering the sustainable system design via hybridizing solar energy with fossil fuel to sustain the existing energy resources, while the long-term solution can be the entirely replacing for the conventional energy sources to compensate the shortage in these resources. The depletion of fossil fuel resources (oil, natural gas, coal) approximately would be up to 2042, except coal which will last after 2042 [1]. The primary assessment of the potential of solar energy at a specific site is essential for selecting and designing solar energy systems (e.g., photovoltaic systems and solar concentrated collectors). However, the substantial impact of uncertainty of the solar irradiance forecast (especially, direct normal irradiance) on the solar power plants output and their profitability over time should be addressed. Moreover, much attention should been paid to the significance of acquiring hour-ahead or day-ahead forecasts of solar irradiance [2]. Accordingly, most recent studies have emphasized on attaining the best forecast accuracy based on high-quality solar irradiance data to reduce the effect of the intermittency nature of solar energy on the uncertainty in the optimal design parameters and the errors in all modeling and measurements [3][4][5].
The solar radiation that travels through the sky until reaching the earth's surface can obtain various forms: direct (beam), diffuse, and reflected (scattered) radiation based on the distance traveled through the atmosphere, the cloudiness amount, the ozone layer intensity, the concentration of haze in the air (water vapor, dust particles, pollutants, etc.), and types of ground surface [6]. Indeed, the most relevant component of solar radiation for concentrated solar power technologies (including parabolic trough, central receiver, linear Fresnel reflector, and parabolic dish) is the direct normal irradiance (DNI). Thus, the performance of the previous technologies reduces dramatically with growing cloud cover; whereas, photovoltaics can generate electric power from diffuse irradiation. Therefore, the long-term evaluation for the technical and economic performance of solar energy technologies is based on the availability of solar radiation data and their accuracy. To move successfully from the investment in small to large-scale solar projects, accurate solar radiation data are essential because small uncertainty in the measured and estimated quantity of solar radiation may jeopardize the economic feasibility of proposed solar projects [7]. Solar radiation measuring instruments (e.g., pyranometer and pyrheliometer) are utilized to obtain reliable solar radiation data over various periods of time [6]. However, the measured data may not available or easily accessible due to the high cost of instruments which used in measuring stations and the technical difficulties to calibrate these instruments, especially in developing countries.
The lack of measured DNI data at the most solar project's sites is a challenging task for researchers and workers in the field of solar energy applications. Despite the availability of global a horizontal irradiance (GHI) and diffuse (DHI) a horizontal irradiance data that can be used to obtain DNI values, there is still a need to model the solar resource in most cases. Consequently, most researchers in this field have formulated various models, regression equations, and empirical correlations to predict solar radiation based on the division basis of the time period (e.g., hourly, daily, monthly) and on the meteorological and geographic parameters. These parameters are maximum and minimum temperature, relative humidity, sunshine duration, cleanness index, cloud cover, geographical site, etc. [7]. The estimated datasets from various models, regression equations, and empirical correlations require precise validation via comparing with high-quality measured datasets. For large-scale solar projects, the importance of the mutual relationship between a lower uncertainty in solar radiation data, minimal financial risks, and profitability has been discussed in [5].
Existing models and methods
The significance of solar radiation modeling emerged through presenting numerous literature which include developing various models, regression equations, and empirical correlations to estimate solar radiation. However, the considerable abundance of models that use for obtaining the solar radiation data sets requires assessing their validity and performance. Therefore, the "Existing models and methods" of this work is allocated for introducing a comprehensive overview of the existing models and methods which included in the various literature. The two categories of solar radiation models: parametric and decomposition are used to predict beam (direct), diffuse, and global components of irradiance based on the availability of other measured or calculated quantities. The parametric (broadband) models have been formulated based on astronomical, atmospheric and geographic parameters to predict the solar irradiance precisely. Additionally, these models are the better choice than decomposition models when meteorological data are not obtainable [6,[8][9][10]. First models have been formulated and tested to estimate the amount of clear-sky direct and diffuse solar radiation on a horizontal surfaces under various climate conditions [11][12][13]. The attenuation influence of a large range of atmospheric constituents on the DNI has been studied. This study demonstrated that the major attenuation was occurred by effecting of constituents, molecular scattering, and water vapor absorption, respectively, while the ozone layer and CO 2 have a minor effect. The tested models have shown a reasonable agreement with small values of the zenith angle [14,15]. The availability of the input parameters (aerosol optical depth or Link turbidity) and implementation simplicity were used as the selection criteria for a number of clear-sky solar irradiance models and to evaluate their accuracy. The parameters, which are measured locally, were more recommended than climatic data sets to avoid underestimated values of the direct and global irradiance [16]. Several simple clear and cloudy sky models of solar global irradiance that do not need meteorological data as inputs have been evaluated. The models can be used to predict the global irradiance for the next few hours or might be for the next day. In addition, the clearsky model can be used for partially cloudy days and the estimated total cloud amount is crucial for the cloudy sky model [17]. Three types of analyses have been used to assess the validity, limitations, and performance of many clear-sky solar irradiance models. These analyses were carried out based on studying the effect of atmospheric effects (e.g., water vapor absorption, aerosol extinction), statistical evaluation, and comparison with a large number of calculated and measured data [18]. The performance of broadband models has been evaluated to identify their accuracy to predict clear-sky direct normal irradiance (DNI) by comparing with high-quality measurements along with a large range of conditions that were selected carefully. Furthermore, the uncertainty in the predicted values of DNI increase pointedly with air mass and they were more sensitive to errors in values of turbidity and precipitable water, which are the two substantial inputs of the parametric models [9,19]. The evaluation procedure, which consists of 42 stages, has been created to test 54 parametric models through the sensitivity analysis. These models can be used to compute global and diffuse irradiance on a horizontal surface. The input data for the models have been adopted from satellite measurements including ground meteorological data and atmospheric column integrated data [20]. The significant review for eighteen clear-sky models has been carried out to assess their performance by comparison between predicted values and measured values under various climate conditions. The high-quality input data were collected from five locations. The selected models can be applied to set up solar datasets, solar resource maps, and large-scale applications. All models were ranked based on their accuracy that determined by four statistical indicators. It has also been found that there is complexity in the prediction of DNI, the prediction of DHI is less accurate, and the number of the model input may not have that obvious influence on its performance and precise [21]. To select a suitable site to install the concentrating solar power plant, seventeen clear-sky models have been studied to verify which model can be used for predicting the more precise values of direct normal irradiance. The performance and accuracy of the models have been tested by comparing their predictions with measured irradiance of a specific site along with using the statistical accuracy indicators. The parametric models have been classified into two groups: simple models that are included less than three inputs (astronomical and geographical parameters) such as ASHRAE, Meinel, HLJ, etc., and complex models that are based on various parameters (the air mass, the ozone layer, aerosols, precipitable water and Linke turbidity factor) such as Bird family models. It is worth noting that simpler models can offer more accurate DNI data than complex models, in other words, an increase in the number of model inputs (e.g., atmospheric parameters) may not necessarily enhance the accuracy and performance of a model [22].
Based on the above-mentioned, the clear-sky models (parametric models) have been developed to estimate the clear-sky irradiation (in the absence of clouds). Hence, they cannot be used to predicate direct normal irradiance (DNI) under cloudy conditions. Consequently, decomposition models are based on the phenomenon of fitting the historical experimental data through empirical correlations, which are typically utilized to calculate direct normal radiation and diffuse radiation on a horizontal surface from global solar radiation data [23]. It is axiomatic that the availability of solar radiation at the earth's surface is considerably influenced by cloudy sky condition. The direct normal irradiance is attenuated significantly with increasing cloud cover and its value may be reached to zero. In contrast, once the value of cloud cover attains intermediate range values, the diffuse solar irradiance (sky radiation) starts growing in the sky until mounting to a maximum value at high range values of cloud cover, or fading to zero at the overcast sky condition [24]. Because of that, the sky state study, based on the temporal and spatial distribution modeling of clouds, is crucial to estimate the availability of all radiation types at a specific site [25]. The various concepts of cloud detection and classification have been discussed, various techniques were developed for cloud classification based on instruments (ground-based, satellite integrated) that used to determine the state of the sky [26][27][28].
Numerous types of cloud cover-based models have developed to estimate hourly and daily solar radiation using cloud cover data [2,[29][30][31]. The cloud-cover radiation model (CRM) is widely used to obtain hourly global solar irradiance forecast based on the cloud cover, which is measured in Oktas and ranging from zero Oktas (an entirely clear sky) through eight Oktas (an entirely overcast sky). The CRM was developed by Kasten and Czeplak using 10 years of hourly cloud amount data [32]. Many researchers have tested the Kasten-Czeplak model (CRM) using the dataset of various sites around the world, and to improve the model's accuracy, the locally fitted coefficients for each of the selected locations were determined by regression analysis [25,27,29,30,[32][33][34][35][36].
In order to obtain average hourly solar radiation values from long-term daily values, global solar radiation decomposition models can be used to transform daily solar radiation values into hourly solar radiation values [37]. The existing models can be divided into three categories based on parameters, physical significance, and constructing methods: the first group of models entails the time factor like solar time, day length, solar hour angle, etc. The most widely used models are the Whillier model [38], Liu and Jordan model [39], and Collares-Pereira and Rabl model [40,41], the second group of models is developed in the Gaussian function form such as Jain model 1 [42], Jain model 2 [43], Shazly model [44], and Baig et al. model [45]. Newell model [46] is the most known model of the third group of models, which is modified from the Collares-Pereira and Rabl model [8,36,47].
Other empirical models have been developed by correlating the clearness index, diffuse fraction, and meteorological parameters based on using the measured data of selected sites to estimate the global and diffuse solar irradiation. The meteorological parameters consist of sunshine period, cloud cover, minimum and maximum temperature, relative humidity, and geographical location.
The clearness index is a random parameter which can sense the meteorological stochastic effects (e.g., atmospheric aerosols, cloudiness, temperature, etc.) on the solar radiation for a time of the day, a season of the year, and a geographical site [48]. It should be noted that the clearness index is sensitive to the short-term effects (atmospheric influences which are described by statistics and the long-term effects (Earth's movement which is described by astronomy) [49]. In general, it represents the ratio of the global solar irradiance on a terrestrial a horizontal surface (which is a stochastic quantity) to the global solar irradiance on an extraterrestrial a horizontal surface (which is a deterministic quantity) for the same time and site [6,50]. In this context, the concepts of long-term of solar radiation data (either daily or monthly average daily) and short-term of solar radiation data (either hourly or monthly average hourly) can be utilized to estimate the cleanness index [6]. As already stated, the clearness index and diffuse fraction are essential factors for evaluating the impacts of cloud on extraterrestrial radiation. Therefore, they both should be considered as random variables to construct probability functions (PDF and CDF) through studying the statistical distribution of their past occurrence to predict their future values within a precise range. Based on that, several investigators have used probability function, which depends on local conditions, in modeling clearness index to predict terrestrial solar radiation and to classify the level of the sky clearness [10,39,49,[51][52][53][54][55].
The sunshine duration is another key indicator for specifying the different sky conditions along with the clearness index and cloud cover. It is the ratio of the actual (bright) hours of sunshine (which is a stochastic value) to the average daylight hours (which is a deterministic value). When the sky is completely cloudless, the bright sunshine hours will be equal to the average daylight hours and the ratio will be 1 and the majority of radiations that gained by the solar energy systems are direct normal irradiance (DNI). In contrast, on a completely or partially cloudy day, the bright sunshine hours may reach zero, thus diffuse radiation will dominate the working of solar energy systems during the time of spreading scattered thin clouds in the sky [36]. When the sunshine duration fraction is approximately 0.3-0.5, the highest diffuse radiation values typically is obtained [23]. However, the uncertainty influence of scattered clouds and their movement in the sky is still representing a great obstacle in estimating a nature and quantity of received radiations on the earth surface [56]. The estimation of sunshine duration data from cloud cover by developing an empirical correlation is quite useful to calculate global solar radiation on a horizontal surface [57]. In the same context, a simple theoretical model has been presented that represents the interrelation of sunshine duration and cloud cover fraction to predict cloud cover fraction that can be further used to calculate global solar radiation on a horizontal surface (GHI) under different sky conditions [56].
It is obvious that the performance evaluation of solar energy systems (solar photovoltaics and solar thermal applications) and selecting their optimized design depends on the availability of solar radiation data and its components. The diffuse radiation is undoubtedly a significant component besides direct normal irradiance for assessing the solar radiation quality. Hence, numerous empirical correlations have been developed to predict diffuse radiation or monthly average daily diffuse solar radiation using clearness index, relative sunshine duration, and cloud cover data [10]. The first correlation developed by Liu and Jordan [39] to estimate hourly diffuse radiation on a horizontal surface from global solar radiation, and based on the same concept, many correlations have been modified by researchers using a large amount of data from different locations over a period of years [75][76][77][78][79]. Other models have been developed for calculating monthly average diffuse solar radiation by employing regression analysis to correlate diffuse fraction with clearness index and relative sunshine duration [39,[80][81][82]. To enhance the accuracy of models for estimating diffuse solar radiation or monthly average daily diffuse solar radiation, several researchers have demonstrated the importance of adding more variables such as ambient temperature, relative humidity, cloud cover, etc. [83]. The prediction of hourly, daily, and monthly global solar radiation and its components on inclined surfaces were discussed in [48,84,85] because the maximum amount of incident solar radiation is received on inclined surfaces.
Although the quite abundant of models and evaluation methods for them that were presented by the existing literature over a few decades ago, it is rarely, in the current literature, finding proper methodologies that can be easily followed by researchers, engineers, and workers in the field of designing solar energy systems to create solar radiation datasets and to evaluate solar energy availability under different sky conditions for assorted solar radiations. Consequently, the aim of this study is to develop two hierarchical calculation methodologies for estimating hourly solar irradiance using various models, empirical correlations and regression equations. Specifically, hourly direct normal irradiance data are utilized for designing solar concentrated collectors. Additionally, the preliminary evaluation for the potential of solar energy in the selected region is carried out by performing a comprehensive analysis of the solar irradiance data and the clearness index to make a proper decision for the capability of utilizing solar energy technologies. The validation and performance evaluation of the proposed approaches for estimating solar data are carried out by using various statistical indicators while comparing with measured solar data.
Theoretical analysis
The design and operation of various solar energy technologies and their applications such as photovoltaic systems and concentrated solar thermal energy systems require obtaining high-quality solar irradiance data for a specific site at any time of a day and a year to make the long-term evaluation for the techno-economic performance for these technologies. Thus, various existing models, empirical correlations and regression equations, which have been discussed in detail in "Existing models and methods", will be investigated along with developing some regression equations in this work to predict different solar radiation types based on the time period and the meteorological and geographic parameters.
Parametric (broadband) models
A large number of parametric models are selected and then tested for accuracy fit by using statistical indicators. The existing models, which are formulated based on astronomical, atmospheric and geographic parameters, are used to predict direct normal irradiance ( I DNI ) under clear-sky condition. The performance of 22 models can be assessed by comparing their results with the measured highquality datasets through statistical indictors. These models are summarized in Table 1.
Based on the above-mentioned description of parametric models, they can be classified: a simple group, and complex group. The simple models are developed by using the zenith angle in addition to a few atmospheric parameters such as temperature, pressure and relative humidity such as Meinel Laue, Kasten and Czeplak, etc., whereas, various input atmospheric parameters such as aerosols, ozone layer and perceptible water are included in models that account as a complex group such as Davies and Hay, Hoyt (Iqbal B), etc. Table 2 is the summary of various astronomical and atmospheric parameters which are used to develop the models.
Cloud cover model (CRM)
In order to predict direct normal irradiance (DNI) under different sky conditions, the cloud-cover radiation model (CRM), which represents a regression-type model and described in detail in "Existing models and methods" can be used. The performance of this model is evaluated against the dataset extracted from a selected site. The first step toward determining DNI from the Kasten-Czeplak model (CRM) is to estimate the hourly global solar radiation (I G cs ) on a horizontal surface under a cloudless sky. The obtained value is used along with cloud cover range (measured in Oktas) to find the hourly global radiation (I G cc ) on a horizontal surface under cloud cover condition. Several instruments (groundbased, satellite integrated) are utilized to determine the sky conditions. Next, the hourly diffuse radiation (I d ) is determined to obtain the value of hourly DNI (I DNI ) under different sky conditions as described in the following formulas that are summarized in Table 3.
A hierarchical calculation methodology
Accordingly, the hourly direct normal irradiance under various sky conditions for different geographical locations can be estimated based on the previous equations, which may contribute to compensate for lack of the solar dataset for a certain site. It should be noted that the availability of DNI dataset is essential to the design and operation of concentrated solar power technologies including central receiver, linear Fresnel, dish sterling and parabolic trough collector, particularly if the expected contribution of these technologies in the total renewable energy production would be about 50.34% by 2030 [22]. The hierarchical methodology is [8,47] (2) ASHRAE model [8,9] I DNI,HLJ = I oN aa [13,22] (3) HLJ model [22] I (6) ESRA model [9] I DNI,Bird = 0.9662I oN total total = rt ot gt wt at [10,12] I DNI,MET = 0.9751I oN total All transmittances ( total ) are similar to Bird model except aerosol transmittance, (9) METSTAT model [9] I DNI,CSR = C CSR I oN total All transmittances ( total ) are similar to Bird model except aerosol transmittance, (10) CSR model [22] 1 3 summarized in Fig. 1, which can be used to predict DNI values in this work through testing fit accuracy of the selected models using statistical indicators and high-quality measured datasets.
Daily global solar radiation (decomposition models)
The decomposition models can be utilized to transform daily values (long-term data) of solar radiation into hourly [86] values (short-term data). The two frequently used correlations for this purpose were chosen. The Collares-Pereira and Rabl correlation represents the ratio of monthly average hourly global irradiance (Ī G ) to monthly average daily global irradiance (H G ) , whereas, the Liu and Jordan correlation represents the ratio of monthly average hourly diffuse irradiance (Ī d ) to monthly average daily diffuse irradiance (H d ) [6], as illustrated in Table 4. Table 5. . Four representative models were selected which are expressed as the ratio of diffuse (H d ) to global irradiance (H G ) on a horizontal surface. These are described as in Table 6.
A hierarchical calculation methodology
The implementation of calculating monthly average hourly direct solar irradiance ( (Ī DNI ) from daily data requires using a hierarchical calculation methodology that consists of multiple sequences steps as described in Fig. 2. The first step in a proposed approach is to estimate geographical and astronomical parameters (L, , hs , T, R, H) based on a selected site and period of time through using Eqs. (35,39). In order to estimate monthly average daily global irradiance on a horizontal surface ( H G ) from equations of Table 5 and monthly average daily diffuse ( H d ) on a horizontal surface from equations of (41) and (42). Eventually, to demonstrate the capability of the proposed methodology and used equations, the statistical indicators can be utilized for comparing estimated irradiance values with measured irradiance datasets.
Site description and data collection
In order to demonstrate validation of proposed methodologies and selected models to estimate reliable and highquality solar radiation data for different sites in Texas or other locations around the world, San Antonio city (29.42° N, 98.49° W) was chosen as a case study as depicted in Fig. 3 [87], which represents one of the significant Fig. 1 A hierarchical methodology of predicting DNI hotspots in the United States due to various activities of water-energy-food nexus [88] such as shale oil and gas production [89][90][91], agricultural production [92,93], etc. The solar data for San Antonio is obtained from the National Solar Radiation Data Base (NSRDB) between 1991 and 2010 are: hourly global solar irradiance, hourly direct solar irradiance, hourly diffuse solar irradiance, hourly solar incidence angle, hourly dry bulk temperature, hourly wet bulk temperature, and relative humidity.
Statistical methods of model evaluation
The performance of proposed methodologies and selected models have been tested through comparison between their estimated data and measured data by using various Collares-Pereira and Rabl [6] statistical indicators. For this purpose, five statistical indicators were applied including mean bias error (MBE), root mean square error (RMSE), absolute percent error (MAPE), coefficient of determination (R 2 ), t statistic method (t stat ), and the percentage error (e %), as given in Table 7.
Results and discussion
In this study, the monthly average daily global irradiance data on a horizontal surface, which was measured in San Antonio, Texas, during the time period 1991-2010, was analyzed to calculate the monthly average clearness index ( K T ). This index is the ratio between monthly average daily total radiation on a terrestrial horizontal surface ( H ) and monthly average daily total radiation on an extraterrestrial horizontal surface ( H o ), as defined in Eq. (47). The comparison between the obtained values from calculating ( K T ) in e time interval 1991-2010 and the values of ( K T ) that provided by Solar Energy Information Data Bank (SEIDB) [94] in the time interval 1952-1975 was carried out and its result has shown a responsible agreement, as shown in Fig. 4. Similarly, the monthly average hourly clearness index ( k t ) values are calculated and reported in Table 8, which is the ratio of the global solar irradiance on a horizontal surface (I) to the hourly extraterrestrial solar irradiance on a horizontal surface (I o ), as given in Eq. (60).
The daily clearness index can be utilized to partition days throughout the year according to the sky condition (Sunny, partly cloudy and cloudy) that dominates transmission of the extraterrestrial irradiance to the earth surface in the chosen site, as shown in Fig. 5.
In addition, the solar irradiance may be subjected to the atmospheric attenuation (absorption, diffusion) during passing through the earth atmosphere due to air pollution, cloudy conditions, and other influencing parameters. Therefore, the hourly clearness index ( k t ), which is considered as a stochastic parameter because it is a function of a period of year, seasons, climatic conditions and geographic site, can be used to predict the influence of these parameters by calculating the average daily sunshine (bright) hours based on the classification of clearness index level, as follows:
Fig. 2 A hierarchical methodology of predicting monthly average hourly direct solar irradiance
The analysis of the monthly average hourly clearness index through the classification of the clearness index level shows that more than 80% of the days can be defined as either sunny or partly cloudy and less than 20% of the days are classified as cloudy. It has been also noted that the individual monthly sky conditions percentage of sunny daytime hours exceed 40% from April through September, while the percentage of cloudy daytime hours do not exceed about 20%, as shown in Fig. 6.
It is apparent from the above-mentioned comprehensive analysis of the irradiance data and the clearness index, the selected region is characterized by a relatively high value of the monthly average percentage for sunny and partly cloudy days, which can be more than 80% throughout the year. Furthermore, the monthly average percentage of sunny daytime hours exceeds more than 50% in the interval time June-October along with a relatively high ( k t > 0.5). Consequently, the San Antonio region in Texas is unequivocally amenable to harnessing solar energy as the prime source of energy by utilizing concentrating and non-concentrating solar energy systems.
In addition to collecting the measured solar irradiance data for the implementation of the proposed methodologies and models, the average daily sunshine hour, average daily length of sunshine hours, ambient temperature and relative humidity are also essential for this purpose, as given in Table 9.
The performance of the selected parametric models (22 models) was tested by comparing its estimations with measured data. The obtained results from implementing the clearsky models on specific days for 12 months are visualized in Fig. 7a-l. It can be seen that the estimated values of hourly direct normal irradiance for most models are in favorable agreement with the measured values for all the months of the year. However, the accuracy and quality evaluation of models' performance require statistical tests for selecting the most precise models under the San Antonio climate conditions.
The results of testing the performance of 22 parametric models through using statistical indicators were tabulated in Appendix 1 (Tables 11, 12 , 13, 14, 15, 16, 17, 18, 19, 20, 21, 22). In addition to more complicated models that consist of a large number of atmospheric parameters such as Davies-Hay, Hoyt (Iqbal B) models, some simpler models like Meinel and Laue have shown a good fit accuracy for all months during the year. Also, the models can be classified into two groups based on their performance during the months of summer and winter seasons. The first group, which includes simple models with a few parameters (less than three geographic and astronomical parameters) such The impact of cloud amount on the estimation of solar irradiance on a specific month (November is chosen as a study paradigm) under the climate conditions of San Antonio, Texas, was studied by using the cloud-cover radiation model (CRM). The cloud amount utilized in this model is evaluated in Oktas, ranging from 0 to 8, and the regression coefficients of the model were obtained from [85]. It can be observed the significant influence of cloud amount on reducing the intensity of global solar irradiation as shown in Fig. 8, specifically DNI, whereas the amount of diffuse irradiance increases in the atmosphere until reaching zero under an overcast sky.
To elucidate the capability of the hierarchical calculation methodology proposed in "A hierarchical calculation methodology" for estimating DNI precisely, four formulations of the Angstrom-Prescott correlation were developed through regression analysis to determine their coefficients as shown in Table 10. The correlations accuracy was tested by [95,96], the National Solar Radiation Data Base (NSRDB), and Solar Energy Information Data Bank (SEIDB) [94] using statistical indicators, as given in Table 10. It is obvious from Figs. 9, 10, 11, 12 and 13 that the estimated values obtaining from correlations show a good agreement with measured data from different sources. In addition to the significance of monthly average daily global solar irradiance in calculating monthly average hourly direct solar irradiance on a horizontal surface by using two decompositions models that transform daily solar irradiance data to hourly solar irradiance, monthly average daily diffuse solar irradiance values are essential for the same purpose. Therefore, the validation of four selected empirical models was performed by comparing their estimated values of monthly average daily diffuse solar irradiance against the measured data. Clearly, the estimated values, which were obtained from three models including Collares-Pereira and Rabl, Liu and Jordan, Gopinathan models, are in good agreement with the measured data [95] except for Iqbal model that shows less consent with measured data, as shown in Figs. 14, 15, 16 and 17.
Based on the previously estimated values of monthly average daily global (by linear model) and diffuse (Liu and Jordan model) solar irradiance and two decomposition models. The estimated values of monthly average hourly direct solar irradiance on a horizontal surface were calculated to attain monthly average DNI values through utilizing zenith angle for this purpose. Scatter plot of the estimated values and measured data (extracted from the National Solar Radiation Database (NSRDB) and [95]) is demonstrated in Fig. 18, which exhibits a relative agreement between these values because original coefficients, which are used in Liu and Jordan model and two decomposition models, were not reconsidered for fitting locally as in Angstrom-Prescott correlation (linear model) coefficients. Therefore, an agreement value between the estimated values and measured data may be enhanced by obtaining locally fitted coefficients for used models.
Conclusions
The significant challenge for exploiting and managing solar energy is the lack of solar radiation datasets and the high cost of measurement equipment in most locations around the world. Consequently, there are quite abundant of models and evaluation methods for them that were presented by the existing literature over a few decades ago, it is rarely, in the current literature, finding proper methodologies that can be easily followed by researchers, engineers, and workers in the field of designing and operation solar energy systems to obtain required solar radiation datasets and to evaluate solar energy availability under different sky conditions for assorted solar radiations. In this study, two hierarchical calculation approaches were developed by using various models, empirical correlations and regression equations to estimate hourly DNI and monthly average hourly DNI data under different sky conditions. The calculation processes can be performed along with the presence or absence of measured solar irradiance data. Additionally, the preliminary assessment for the potential of solar energy was carried out to make a proper decision for installing concentrated solar collectors at the selected site. A case study for the San Antonio region in Texas was solved to demonstrate the accuracy of the proposed approaches for estimating hourly solar irradiance, which is utilized for designing solar concentrated collectors. The obtained results from the study are presented as follows: • Based on the preliminary assessment for the potential of solar energy for the selected location by performing the comprehensive analysis. The San Antonio region in Texas is unequivocally amenable to harnessing solar energy as the prime source of energy by utilizing concentrating and non-concentrating solar energy systems because the analysis of the monthly average hourly clearness index through the classification of the clearness index level shows that more than 80% of the days
Appendix 1
The results of testing the performance of 22 parametric models through using statistical indicators are shown in Tables 11, 12 | 8,250.2 | 2019-11-27T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
A new mechanism to use the Conditions Database REST API to serve the ATLAS detector description
Efficient and fast access to the detector description of the ATLAS experiment is needed for many tasks, at different steps of the data chain: from detector development to reconstruction, from simulation to data visualization. Until now, the detector description was only accessible through dedicated services integrated into the experiment’s software framework, or by the use of external applications. In this work, we explore the possibility of using a web access-based conditions database to store and serve the detector description, aiming at a simplification of the software architecture of the experiment and a reduction in the number of software packages to be maintained.
Introduction
Currently, the detector description of the ATLAS experiment [1] is built by fetching geometry data from an Oracle database through the experiment's framework, Athena [2]. On average, about 300 SQL queries are needed to build the complete geometry. Moreover, the geometry is built on-the-fly and stored in-memory only, thus the process needs to be repeated every time the detector description has to be accessed by a data-processing job.
Recently, new developments decoupled the geometry core software from the experiment's framework and created a mechanism to get a persistent copy of the detector description, together with adding new serialization formats [3].
Leveraging these new developments, and with a view to further decoupling from the framework, a new mechanism is presented in this proceedings, which uses the new conditions database CREST [4,5] to serve the detector description through a HTTP REST API.
The goals of the new prototype are the following: to provide a lighter software stack in the Athena framework to access geometry via a simple REST service and to reduce the amount of network operations to serve the geometry to clients. Figure 1: a) the architecture currently implemented in ATLAS: the detector description is built on-the-fly, taking the definition of the GeoModel tree from C++ code and the geometry parameters from a dedicated Oracle database though a dedicated service; all operations are performed within the experiment's framework. b) The current actors and actions involved in the retrieval of geometry parameters from the Oracle DB. Today, about 300 SQL queries are needed to retrieve the full set of geometry parameters, performed by each of the jobs processing the data.
Parameters are used to further customize the base shapes/structure used in the GeoModel tree; they are stored in the Oracle-based GeometryDB [7]. Different sets of parameters can be saved in the GeometryDB, to store different configurations of the detector geometry.
When requested by a data-processing job (see Figure 1a), the GeoModel tree-defined in C++ code for each of the ATLAS subsystems-is traversed; then, the tree nodes-i.e., the detector elements-are customized with the parameters extracted from the GeometryDB, and the geometry is built on-the-fly. All these operations are performed through the experiment software framework, Athena. The resulting detector description is then stored in-memory and used by the requesting job.
The GeometryDB database consists of a set of tables dedicated to parameters for individual detector elements and each set of parameters is identified via a dedicated tag. A set of valid tags at a given moment is identified via a unique parent tag, called geometry tag. The schema implementation is then based on a hierarchical versioning: a given geometry tag identifies a frozen version of the geometry parameters. Geometry parameters are then accessed via SQL queries , starting from the unique entry point provided by the geometry tag.
Access to geometry parameters from Athena
Today, the geometry parameters from the Oracle database are accessed through a set of SQL queries: as said, about 300 queries are needed to retrieve the full set of parameters for a given geometry tag. In addition, due to the fact that the detector geometry is built on-the-fly, those queries need to be performed by each job which needs the access to the detector's description.
A dedicated service acts as interface between the GeoModel and the Oracle DB (see Figure 1b). All queries are performed via Coral [8] (a C++ access layer to the database) and executed via Frontier/SQUID [9], in a context of distributed computing.
In HEP experiments, there is a distinction between raw and readout geometry; the Ge-oModel library describes the geometry of the single elements which compose the detector, without describing how those elements are assembled and their signal read out; that part, in fact, is defined and customized at a later stage by each sub-system of the experiment. In this paper, we only address the raw geometry, that is the collection of geometrical data which describe the bare detector elements.
A standalone geometry description
Being described in C++ code and built on-the-fly, the geometry was not queryable so far: the GeometryDB, in fact, stores properties and parameters; but all the relationships between nodes, which define the structure of the detector geometry, are stored inside the C++ description only. Moreover, the tight integration of the geometry mechanism with the ATLAS software framework prevented, so far, lightweight access to the experiment's geometry.
The recent decoupling of the GeoModel library from the framework, and the possibility of dumping the full experiment's geometry to file, made possible the exploration of alternative methods to access, read, explore, debug, and visualize the experiment's geometry, opening up many possibilities to further development. Two new exporters have been implemented: SQLite and JSON. Further information can be found in the above mentioned Ref. [3].
By using the new exporters (see Figure 2), we can dump the full detector description from memory into new data structures for any given geometry tag: the resulting file contains the customized version of the geometry defined by the properties defined by the tag. Of course, saving a full geometry description for each tag leads to redundant information; but it gives us the possibility of having a persistent copy of a geometry version, to be saved, shared, served, explored, and debugged without accessing the experiment framework. This is specially useful for data-processing jobs which need to perform read operations only.
New ways of building and serving the detector geometry
The new standalone geometry representation lets applications easily read and store a persistent copy of the detector description, without the need of the experiment's framework. The new mechanism presented here leverages this new feature, by using the HTTP REST API of the CREST [10] conditions database to efficiently serve the full detector geometry to clients which need read-only access to it, without the need of extra layers.
REST access to the geometry
The access through a HTTP REST API let applications running outside the experiment's framework access the information about the detector geometry (see Figure 2). However, it brings other benefits: it greatly simplifies the access to the detector description itself, for read-only operations, for both standalone and integrated applications. This has two main advantages: a reduced number of database queries, which can be reduced from O(∼300) to O(∼1); and the possibility of accessing the geometry through a REST API, removing the need to include specific SQL knowledge in the client code.
The new REST API helps to simplify the architecture of code running inside the experiment's framework, as well. In fact, while the Oracle-based GeometryDB has a table structure well suited for managing the storage and the versioning of all the different pieces of information about the detector's geometry, we do not need the same kind of complexity in read-only operations performed in a typical data-processing job running inside the Athena framework. Hence, the use of a REST API improves the development and the maintenance of applications, both Athena-based and standalone.
Storing geometry data in the Conditions database
For this prototype we have been using the proposed CREST database infrastructure that AT-LAS is evaluating as a prototype for the future data-taking periods. However, similar conclusions do apply to the existing COOL [9] data model as well, and can thus be used in the upgrade of the current system.
The new standalone geometry mechanism, as said, is currently able to produce a persistent copy of the detector description in two formats: SQLite and JSON. The JSON version has been chosen for this prototype and the file is represented in the Conditions DB as a blob type, associated to a single, so-called Interval of Validity (IOV, that defines the time over which the data is declared "valid", and which spans from 0 to Infinity) and to a single tag, which could correspond to the previous mentioned geometry tag.
For these tests, the prototype of the CREST database deployed on Openshift [11] at CERN has been used, and Python client libraries have been used to interact with the CREST server via REST [12].
An overview of the new architecture
The new CREST REST API has been built using the Swagger OpenApi [13]-an interface description language (IDL), which lets users efficiently specify the specifications of a REST service. The actual code of the REST interface is then automatically generated based on the input specifications, either at server or client level, in several languages and frameworks (See Figure 3).
For this prototype [14], a C++ client library has been generated, based on the Qt framework [15], which can connect to the CREST server via HTTP using the REST API, without the need of additional components.
The new architecture using the Conditions database to serve the full details of a given geometry tag stored in a JSON file offers a number of advantages (See Figure 4). In grey, the current system to retrieve the geometry data from the Oracle-based GeometryDB, through the experiment's framework. In the middle, the new architecture, which uses an HTTP REST API to retrieve geometry data from the CREST conditions database, where are stored as JSON blob. The same REST API could be used by standalone applications as well (on the right, in the drawing). At the bottom, a future extension of the architecture is foreseen, querying and filtering geometry data from a Neo4j, graph-database instance.
The information about the detector description is provided to clients with a single call to the REST API. Only a single request is necessary, as opposed to the O(300) SQL queries required by the current system.
The new architecture also helps to simplify the structure of the ATLAS Athena services. For simple read-only operations, in fact, a single database can be used, the Conditions database, to retrieve both conditions and geometry data. This simplifies the software components maintenance, as well.
The new architecture also reduces the dependency on SQL libraries. By using a REST service to read the geometry data, instead of performing a set of SQL queries, the typical job, de facto, should not even be aware that is accessing a database. For a given geometry tag ID, in fact, clients can access the full geometry information with a single HTTP call-e.g., http://crest-undertow.web.cern.ch/crestapi/payloads/<ID>.
The SQL layer will still exist, but would be used only for the management of the geometry parameters in the Oracle-based database (or other relational databases), which is a completely different use-case from loading a geometry in memory, for example, for reconstruction or data visualization jobs.
Conclusions and plans
The new architecture explores the use of the Conditions database to efficiently store and serve the detector description. This, as seen, offers a number of advantages.
The first prototype of the new architecture has been developed and the first tests were successful. Now, in order to further develop and optimize the new system, a set of additional steps need to be performed. A set of connection tests will be run to measure the performance of the new access system compared to the one currently used in ATLAS. After that, based on the achieved results, the new system will be tested within the ATLAS framework, as a potential replacement of the existing mechanism, to load and build geometry data in jobs which only needs to perform read-only operations on the detector description.
After that, the REST approach will be further explored for other future geometry backends. One foreseen extension is the addition of a Neo4j [16,17] backend, in order to offer advanced querying and filtering functionalities on detector description data. | 2,833.6 | 2019-07-01T00:00:00.000 | [
"Computer Science"
] |
Lagrangian formulation of the massive higher spin N=1 supermultiplets in $AdS_4$ space
We give an explicit component Lagrangian construction of massive higher spin on-shell $N=1$ supermultiplets in four-dimensional Anti-de Sitter space $AdS_4$. We use a frame-like gauge invariant description of massive higher spin bosonic and fermionic fields. For the two types of the supermultiplets (with integer and half-integer superspins) each one containing two massive bosonic and two massive fermionic fields we derive the supertransformations leaving the sum of four their free Lagrangians invariant such that the algebra of these supertransformations is closed on-shell.
Introduction
The higher spin theory (see e.g. the reviews [1] [2], [3]) has attracted significant interest for a long time and for many reasons. On the one hand, the theory of massless higher spin fields is a maximal extension of the Yang-Mills gauge theories and gravity including all spin fields. On the other hand, it is closely related to superstring theory which involves an infinite tower of higher spin massive fields. In principle, the higher spin field theory can provide the possibility to study some aspects of string theory in the framework of the field theory. It is also worth pointing out that the construction of Lagrangian formulations for the higher spin field models is extremely interesting itself since it allows to reveal the new unexpected properties to relativistic field theory in general.
Beginning with work [4] it became clear that the nonlinear massless higher spin theory can only be realized in AdS space with non-zero curvature. This raises the interest in studying the various aspects of field theory in AdS space in the context of AdS/CF Tcorrespondence. Taking into account that the low-energy limit of superstring theory should lead to supersymmetric field theory we face the problem of constructing the supersymmetric massive higher spin models in the AdS space. It is expected that the supersymmetry can be an essential ingredient of the consistent theory of all the fundamental interactions including quantum gravity. It is possible that such a theory should also involve the massless and/or massive higher spin fields. This paper is devoted to developing the N = 1 supersymmetric Lagrangian formulation of free massive higher spin models in AdS space in the framework of on-shell component formalism.
In supersymmetric theories the massless or massive fields are combined into the corresponding supermultiplets. In the case of free field models containing the different spin fields, it is natural to expect that the Lagrangian should be the sum of the Lagrangians for each concrete spin field. To provide an explicit Lagrangian realization of the free supermultiplet one has to find supertransformations leaving the free Lagrangians invariant and show that the algebra of these supertransformations is closed at least on-shell. In the case of the N = 1 supersymmetry the massless higher superspin-s supermultiplets consist of the two massless fields with spins (s, s + 1/2). The task of constructing supertransformations for such supermultiplets in four dimensional flat space was completely solved in the metric-like formulation [5] and soon in the frame-like one [6]. In both cases the supertransformations have a simple enough structure and are determined uniquely by the invariance of the sum of the Lagrangians for two free massless fields with spins s and s + 1/2. Note that such a requirement allows to find only on-shell supersymmetry when supertransformations are closed on the equations of motion. In order to find off-shell supertransformations, it is necessary to introduce the corresponding auxiliary fields.
A natural procedure to construct off-shell N = 1 supersymmetric Lagrangian models is realized in terms of N = 1 superspace and superfields (see e.g. [7]), where all the auxiliary fields providing closure of the superalgebra are automatically obtained. In the framework of superfield formulation the N = 1 supersymmetric massless higher spin models were constructed in the pioneer papers [8,9]. Later, on the basis of these results, N = 1 supersymmetric massless higher spin models were generalized for AdS 4 space [10], [11] 1 . In both cases the constructed superfield models, after eliminating the auxiliary fields, reduce to the sum 1 Application of this formulation for quantization of the N = 1 higher spin superfield model in AdS 4 space of spin-s and spin-(s + 1/2) (Fang)-Fronsdal Lagrangians [20,21] in flat or AdS spaces. The generalization for N = 2 massless higher spin supermultiplets was given in [11], [22].
There are much fewer results in the case of supersymmetric massive higher spin models even in the on-shell formalism, the reason being that when moving from the massless component formulation to the massive one very complicated higher derivative corrections must be introduced to the supertransformations. Moreover the higher the spin of the fields entering a supermultiplet the higher the number of derivatives one has to consider. The problem of the supersymmetric description of the massive higher spin supermultiplet was only explicitly resolved in 2007 for the case of N = 1 on-shell 4D Poincare superalgebra [23] (see also [24][25][26]) using the gauge invariant formulation for the massive higher spin fields [27][28][29] 2 . In such a formalism the description for the massive field is obtained in terms of the appropriately chosen set of the massless ones. It is assumed that the Lagrangian for massive higher spin supermultiplets is constructed as a sum of the corresponding Lagrangians for massless fields deformed by massive terms. However, it appeared [23] that to realize such a program one has to use massless supermultiplets containing four fields (k − 1/2, k, k ′ , k + 1/2) as the building blocks, where two bosonic fields with equal spins have opposite parities, and this prevents us from separating them into the usual massless pairs. In [23] it was shown that to obtain the massive deformation it is enough to add the non-derivative corrections to the supertransformations for the fermions only. Complicated higher derivative corrections to the supertransformations reappear when one tries to fix all local gauge symmetries, breaking the gauge invariance. Note however that in such construction the mass-like terms for the fermions in the Lagrangian take a complicated non-diagonal form making calculations rather cumbersome. Surprisingly however, in 4D the above results remain the main results in the massive supersymmetric higher spin theory until now 3 . The aim of this paper is to extend and generalize the results of [23] to the case of four dimensional N = 1 AdS 4 superalgebra.
We use the gauge invariant description of the massive higher spin bosonic and fermionic fields but in the frame-like version [38,39]. Recall that one of the attractive features of such a formalism is that it works nicely both in flat Minkowski space as well as in (A)dS spaces. Our strategy differs from that of [23]. For the Lagrangian we take just the sum of four free Lagrangians for the two massive bosonic and two massive fermionic fields entering the supermultiplet. Then for each pair of bosonic and fermionic fields (we call it superblock in what follows) we find the supertransformations leaving the sum of their two Lagrangians invariant. Next we combine all four possible superblocks and adjust their parameters so that the algebra of the supertransformations is closed on-shell.
The paper is organized as follows. In section 2 we give all necessary descriptions of the frame-like formulation of massless bosonic and fermionic higher spin fields and alos we present the massless higher spin supermultiplets in AdS 4 in such a formalism. Massless models given in this section will serve as the building blocks for our construction of the massive was considered in [12]. The superfield approach was recently applied for construction of the higher spin supercurrents [13], [14], [15]. [16], [17], [18], [19]. 2 Another gauge invariant approach to Lagrangian formulation of massive higher spin fields is given on the base of BRST construction [30], [31], [32], [33]. higher spin models. In section 3 we give frame-like gauge invariant formulations for free massive arbitrary integer and half-inter spins. In section 4 we consider massive superblocks containing one massive bosonic and one massive fermionic field and find corresponding supertransformations. In section 5 we combine the constructed massive superblocks into one massive supermultiplet. Notations and conventions. In this work we use a technique of p-forms taking the values in the Grassmann algebra. The main geometrical objects are some p-forms Ω (p=0,1,2,3,4). They are defined as where Ω µ 1 ...µp is the antisymmetric tensor field. In particular, the partial derivative is defined as one-form d = dx µ ∂ µ . In 4D it is convenient to use a frame-like multispinor formalism where all the Lorentz objects have local totally symmetric dotted and undotted spinorial indices (see a description of irreducible representations of 4D Lorentz group in terms of dotted and undotted spinors e.g. in [7]). To simplify the expressions we will use the condensed notations for them such that e.g.
We also always assume that spinor indices denoted by the same letters and placed on the same level are symmetrized, e.g.
We work in the AdS 4 space which is described by pair 1-forms: background frame e αα which enters explicitly in all constructions and background Lorentz connections ω αβ , ωαβ which are hidden in the 1-form covariant derivative The covariant derivative satisfies the following normalization conditions: where E αβ , Eαβ are basis elements of 2-form spaces and defined below as the double product of e αα . Basis elements of 1, 2, 3, 4-form spaces are: They are defined as follows: so the Hermitian conjugation laws look like (e αα ) † = e αα , (E α(2) ) † = Eα (2) , We also write some useful relations for these basis elements e αβ ∧ e ββ = 2E αβ , e βα ∧ e ββ = 2Eαβ The spinor indices are raised and lowered with the help of the antisymmetric Lorentz invariant tensors ǫ αβ , ǫαβ and inverse ǫ αβ , ǫαβ respectively. All the products of p -forms are understood in the sense of wedge-products. Henceforth the sign of wedge product ∧ will be omitted.
Massless higher spin models
In this section we provide all necessary description of the massless bosonic and fermionic higher spin fields as well as massless higher spin supermultiplets in the frame-like multispinor formalism used in this work. In what follows they will serve as building blocks for our construction for massive supermultiplets.
Integer spin k
In the frame-like formulation a massless spin-k field (k ≥ 2) is described by the physical one-form f α(k−1)α(k−1) and the auxiliary one-forms Ω α(k)α(k−2) , Ω α(k−2)α(k) , being the higher spin generalization of the frame and Lorentz connection in the frame-like formulation of gravity. Locally they are two-component multispinors symmetric on their dotted and undotted spinorial indices separately. These fields satisfy the following reality condition The free Lagrangian (a four-form in our formalism) for the massless bosonic field in the four-dimensional AdS 4 space looks like this: Here and henceforth h.c. means hermitian conjugate terms defined by rules (2.1). This Lagrangian is invariant under the following gauge transformations δΩ α(k)α(k−2) = Dη α(k),α(k−2) + e βα ζ α(k)βα(k−3) + λe αβ ξ α(k−1)α(k−2)β δΩ α(k−2)α(k) = Dη α(k−2),α(k) + e αβ ζ α(k−3)α(k)β + λe βα ξ α(k−2)βα(k−1) where zero-forms ξ α(k−1)α(k−1) and η α(k),α(k−2) are the gauge parameters for the gauge fields f α(k−1)α(k−1) and Ω α(k),α(k−2) . The additional gauge parameter ζ α(k+1)α(k−3) leads to the introduction of the so-called extra field Υ α(k+1)α(k−3) +h.c. which in turn requires introduction of next extra symmetries and so on. The procedure stops at These extra gauge fields do not enter the free Lagrangian but play an important role in non-linear higher spin theory. One of the nice features of the frame-like formulation is that for all fields (physical, auxiliary and extra ones) one can construct a gauge invariant two-form (curvature) generalizing curvature and torsion for gravity. For the physical and auxiliary fields they have the form In our construction for the massless and massive supermultiplets we use only the physical and auxiliary fields working in the so-called 1 and 1/2 order formalism which is very well known in supergravity. Namely, we do not consider any variations of the auxiliary fields but all calculations are done using the "zero torsion condition": At the same time the variation of the Lagrangian (2.2) under the arbitrary variations of the physical fields can be written in the following simple form One can see that in (2.6) and (2.7), the curvatures R enter in such combinations that extra gauge field Υ is dropped out, therefore below they will be omitted.
Half-integer spin k + 1/2
In the frame-like formulation, the massless spin-(k+1/2) field (k ≥ 1) is described by physical As in the bosonic case, these two-component multispinors are symmetric on their dotted and undotted spinorial indices separately and satisfy the reality condition The free Lagrangian for such fields in AdS 4 space looks like this: and is invariant under the following gauge transformation where d k−1 = ± λ 2 Note that the transformations with the gauge parameters η α(k+1)α(k−2) , η α(k−2)α(k+1) lead to the introduction of extra fields that play the same role as in the bosonic case and do not enter the free Lagrangian. Up to these extra fields the gauge invariant curvatures for the physical fermionic fields have the following form The variation of the free Lagrangian (2.8) under the arbitrary variations of the physical fields can be written as Here the curvature F also enters in such a combination with the frame e that the extra fields drop out. In the following part of this section we combine massless higher spin fermionic and bosonic fields in the N = 1 supermultiplet and construct an explicit form of the supertransformations leaving the sum of the free Lagrangians invariant.
Supermultiplet (k + 1/2, k)
This supermultiplet contains higher half-integer spin k + 1/2 and integer spin k. They are described by (Φ α(k)α(k−1) , h.c.) and (f α(k−1)α(k−1) , Ω α(k)α(k−2) , h.c.) respectively. We choose an ansatz for the supertransformations in the following form (as it was already mentioned, we consider supertransformations for the physical fields only): were we assume that the coefficients α k , β k , γ k are complex. The parameters of the supertransformation ζ α , ζα in AdS 4 satisfy the relations Using the expressions for Lagrangian variations (2.7) and (2.11) as well as on-shell identity (2.6) the variation for the sum of the bosonic and fermionic Lagrangians can be written as follows: Note that the invariance of the Lagrangian under the supertransformations can be achieved up to the total derivative only and this leads to a number of useful identities. For example, let us consider Using the explicit expressions for the bosonic (2.5) and fermionic (2.10) curvatures as well as relation (2.13) we obtain: Similarly, if one considers two relations: then using the explicit expression for the fermionic curvature as well as zero torsion condition one obtains: Using these identities we obtain from the requirement δ(L k + L k+ 1 2 ) = 0: The solution of the last relation depends on the sign of d k−1 = ± λ 2 . The parameter β k is real for the "+" sign and is imaginary for the "-". These two solutions correspond to the parityeven and parity-odd bosonic fields entering the supermultiplet. This fact will be important for the construction of massive supermultiplets where two bosonic fields must have opposite parities.
Supermultiplet
This supermultiplet contains higher integer spin k and half-integer spin k − 1/2. They are described by ( , h.c.) respectively. We choose an ansatz for the supertransformations in the following form Here in the most general case, coefficients α ′ k , β ′ k , γ ′ k are complex. Using the expressions for Lagrangian variations (2.7) and (2.11) as well as on-shell relation (2.6) we get As in previous case from two relations one can derive two identities: Then the invariance of the Lagrangian under the supertransformations requires that As in the previous case, we see from last relation that parameter β ′ k can be real or imaginary. It depends on the sign of d k−1 = ± λ 2 and is related to the parity of the fields entering the supermultiplet.
Massive higher spin fields
In this section we provide frame-like gauge invariant formulation for massive arbitrary integer and half-inter spins [38] but with the multispinor formalism used for all local indices.
Integer spin s
In the gauge invariant formalism a massive integer spin-s field is described by a set of massless fields with spins 0 ≤ k ≤ s. Frame-like formulation of massless bosonic fields with spins k ≥ 2 were considered above, they are described by one-forms (f α(k−1)α(k−1) , Ω α(k)α(k−2) +h.c.) while massless spin-1 is described by the physical one-form A and auxiliary zero-forms B α(2) , Bα (2) , and massless spin-0 is described by physical zero-form ϕ and auxiliary zero-form π αα .
The gauge invariant Lagrangian for the massive bosonic field has the form: Here L kinetic is just the sum of kinetic terms for all fields, that for k ≥ 2 were defined in (2.2), L mas is the sum of the mass terms for them, while L cross contains cross-terms gluing all these fields together. In what follows we assume that all parameters a k , a 0 ,ã 0 are positive. Explicit form of the coefficients (3.2) are determined by the invariance of the Lagrangian (3.1) under the following gauge transformations δf α(k−1)α(k−1) = Dξ α(k−1)α(k−1) + e βα η α(k−1)βα(k−2) + e αβ η α(k−2)α(k−1)β Compared to the massless case in the previous section, one can see that we still have all the gauge symmetries that our massless fields possessed modified so as to be consistent with the structure of the massive Lagrangian. Such gauge invariant formulation of the massive theory in (A)dS 4 space possesses some remarkable features. Firstly, we can consider a flat limit λ → 0 and immediately obtain the description of the massive fields in Minkowski space. Secondly, in anti-de Sitter space when λ 2 > 0 there is a correct massless limit m → 0 without the gap in the number of physical degrees of freedom. In such a limit our system decomposes into two systems describing the massless spin-s and the massive spin-(s − 1) fields. Lastly, in de Sitter space when λ 2 < 0 one can consider the so-called partially massless limits a k → 0.
In such a limit, the system decomposes into the two subsystems describing the partially massless spin-s field and the massive spin-k field. As in the massless case, to construct a complete set of the gauge invariant objects one has to introduce a lot of extra fields which do not, however, enter the free Lagrangian. In the following, we restrict ourselves to the curvatures for the physical and auxiliary fields only. With the explicit expressions for the gauge transformations at our disposal (3.3), it is rather straightforward to obtain (we omit all terms with the extra fields): In our construction of the massive supermultiplets we will consider supertransformations for the physical fields only. However, in all calculations we will heavily use the auxiliary fields equations (on-shell conditions) as well as corresponding algebraic identities: The variation of the Lagrangian (3.1) under the arbitrary variations for the physical fields takes the simple form Let us stress once again that this expression is such that all extra fields drop out.
Half-integer spin s + 1/2
In the gauge invariant formalism, the massive half-integer spin-s + 1/2 field is described by the set of massless fields with spins 1/2 ≤ k + 1/2 ≤ s + 1/2. Frame-like formulation for the massless fermionic fields with spins (k ≥ 1) were considered above, they are described by the one-forms (Φ α(k)α(k−1) , h.c.), while massless spin-1/2 is described by a physical zero-form (φ α , h.c.). The Lagrangian for free massive field in AdS 4 have the form In the following, we assume that the parameters c k , c 0 , M 1 are positive. Explicit form of the coefficients (3.8) are determined by the invariance of the Lagrangian under the following gauge transformations δΦ α(k)α(k−1) = Dξ α(k)α(k−1) + e βα η α(k)βα(k−2) + 2d k e αβ ξ α(k−1)α(k−1)β +c k+1 e ββ ξ α(k)βα(k−1)β + c k (k − 1)(k + 1) e αα ξ α(k−1)α(k−2) (3.9) The general structure of the Lagrangian (3.7) is the same as in the bosonic case. The first line is the sum of kinetic terms, the second line contains cross-terms and the last two lines are mass terms. In such a formulation we can take the correct massless limit m 1 → 0 in AdS (λ 2 > 0) and the correct partially massless limits c k → 0 in dS (λ 2 < 0). Taking a flat limit λ → 0 we obtain the description of the massive fermionic fields in Minkowski space. As in the bosonic case, we restrict ourselves with the gauge invariant curvatures for the physical fields only omitting all the extra fields: The variation of the Lagrangian (3.7) under the arbitrary variations of the physical fields has the following form
Massive higher spin superblocks
There are two types of massive N = 1 supermultiplets, each one containing two massive bosonic fields (with opposite parities) and two massive fermionic ones: To provide an explicit realization of such supermultiplets one has to find supertransformations connecting each bosonic field with each fermionic field so that: 1) the sum of the four free Lagrangians for these fields is invariant; 2) the algebra of the supertransformations is closed. In this work we use the following strategy. Firstly, for each pair of bosonic and fermionic fields (we call it superblock in what follows) we find the supertransformations leaving the sum of their two Lagrangians invariant. Then we combine all four fields together and adjust parameters of these superblocks so that the algebra of the supertransformations is closed. One can see from the diagrams above that there are only two non-trivial superblocks, namely (s, s + 1/2) and (s − 1/2, s). Such a strategy therefore greatly simplifies the whole construction.
In the gauge invariant formalism the description of massive higher spin fields is constructed out of the appropriately chosen set of massless ones. It seems natural to expect that one can construct a description of massive higher spin supermultiplet out of an appropriately chosen set of massless ones. Indeed, if one decomposes all four massive fields into their massless components, the resulting spectrum of massless components does correspond to some set of massless supermultiplets. However, the explicit structure of the supertransformations (see below) shows that all massless components still remain connected with all their neighbours so that the whole system looks just like one big massless supermultiplet (similarly to what we obtained in the three dimensional case [40]): One can introduce new fermionic variables: and adjust mixing angles Θ k so that the whole system decomposes into the sum of massless supermultiplets containing two bosonic and two fermionic fields: The separation of these supermultiplets into the usual pairs is impossible because the bosonic fields have opposite parities. However in this case the structure of cross and mass-like terms in the fermionic Lagrangian cease to be diagonal making the construction of massive supermultiplets more complicated. Note that it is this approach that was used in the previous works of one of the current authors [23].
Supertransformations
We begin with a general discussion valid for the construction of both massive superblocks and consider the most general ansatz for the supertransformations. For the bosonic field variables we choose and for the fermionic ones where all coefficients are complex. One can see that the supertransformations for higher spin components are combinations of the massless supertransformations (2.12) and (2.16).
The ansatz for the supertransformations (4.1), (4.2) has the same form for both massive superblocks (s+1/2, s) and (s, s−1/2), the only difference being in the boundary conditions. In the first case we have while in the second case The variation of the sum of the bosonic and fermionic Lagrangians (3.6), (3.11) under the supertransformations (4.1), (4.2) has the form δL + δL ′ , where Here we used the equations for the auxiliary bosonic fields (3.5). We now proceed as in the massless case, deriving the corresponding identities. Let us recall a general scheme. Lagrangian variation (4.5) has the structure where Φ, F are sets of all fields and curvatures for the fermion, f is a set of physical fields for the boson and Ω, R are sets of auxiliary fields and curvatures for the boson. Since a Lagrangian is defined up to a total derivative we have two type of identities D[ΦΩζ] = 0, D[Φf ζ] = 0. They lead to Using the explicit form of identities (4.6), (4.7) (see Appendix A for details), we obtain expressions for the parameters α and γ in terms of β: and and we also obtain recurrent equations on the parameters β k 2(k + 1)β k−1 c k+1 = kβ k a k+1 , as well as four independent equations which relate β and β ′ and the bosonic and fermionic mass parameters: The explicit solution of these equations depends on the concrete massive superblock. More spefically, it depends on the initial conditions (4.3), (4.4) and on the sign of d k , i.e. on the sign before massive terms in Lagrangian for fermions. In the following we present exact solutions for two massive superblocks (s + 1/2, s) and (s, s − 1/2, ).
Superblock (s + 1/2, s)
Here we present our results for the massive superblock (s + 1/2, s). The massive boson spin-s with the mass parameter M, as described in section 3.1, we have The massive fermion spin-(s + 1/2) with mass parameter M 1 as described in section 3.2, here Supertransformations for massive superblock (s + 1/2, s) have the form (4.1), (4.2) with the initial conditions (4.3). The parameters α k and γ k are determined by (4.8) and (4.9). From the equation (4.14) one can obtain an important relation on the bosonic and fermionic mass parameters. Indeed, at k = s we have where the sign corresponds to that of d k . So we have four independent cases The solution of other equations give, for Therefore, we see that in the two cases the parameters β are real and in the other two they are imaginary. This means that one half of the solutions corresponds to the massive superblocks with the parity-even boson while another half corresponds to the massive superblocks with the parity-odd one.
In order to present these four cases for the massive superblock (s, s + 1/2) in a more clear form let us introduce following notations. We denote integer spin s with the mass parameter M as [s] ± M (4. 16) here ± corresponds to parity-even/parity-odd boson. We also denote half-integer spin s+1/2 with the mass parameter M 1 as here ± corresponds to the sign of d k . In these notations the four solutions for the massive superblock given above look as follows: In the first and third cases we have here the upper sign corresponds to 1) and the lower sign corresponds to 3). In the second and fourth cases we have here the upper sign corresponds to 2) and the lower sign corresponds to 4).
Superblock (s, s − 1/2)
Here we collect our results for the massive superblock (s, s − 1/2). For the massive even or odd spin-s boson with the mass parameter M we use the same formulation as in the previous subsection, while for the massive spin-(s − 1/2) fermion with the mass parameter M 2 we use its description in section 3.2 with the shift s → (s − 1): Supertransformations for the massive superblock (s − 1/2, s) are the same as in the previous case (4.1), (4.2) but with different initial conditions (4.4). The parameters α k and γ k are still determined by (4.8) and (4.9). From the equation (4.13) one can relate bosonic and fermionic mass parameters, indeed at k = s we have here the sign corresponds to that of d k . So we again have four independent cases
Massive higher spin supermultiplets
In the previous section we constructed massive superblocks containing one massive fermion and one massive boson. For each individual superblock, we found supertransformations defined up to a one common parameter ρ. In this section we use these results to construct complete massive supermultiplets. For that we choose appropriate solutions for each superblock and adjust their parameters so that the algebra of these supertransformations is closed. In the next subsection, we consider general properties of such construction and then present our results for the case of integer and half-integer superspins.
General construction
Any massive N = 1 supermultiplet contains two massive fermions and two massive bosons.
In the notations given in the previous section (4.16), (4.17) they have the following structure As already mentioned, the two bosonic fields must have opposite parities and it appears that the two fermionic fields must have opposite signs of the mass terms. Let us introduce notations (f + , Ω + ) for the parity-even boson and (f − , Ω − ) for the parity-odd one. The fermions we denote as Φ + , Φ − according to the sign of d k .
The ansatz for the supertransformations is a combination of four possible superblocks corresponding to the lines with the parameters ρ 1,2,3,4 . For example, for the parity-even boson we take: (and similarly for the lower spin components), while the ansatz for the parity-odd one can be obtained by replacement ρ 1 → ρ 3 and ρ 2 → ρ 4 .
The commutator of the two supertransformations must produce a combination of translations and Lorentz transformations: Firstly, we note that four superblocks give the following relations on the mass parameters: Their solution is All the conditions for the closure of the superalgebra are fulfilled provided: If the relations (5.3) are satisfied then the commutators of the supertransformations on parity-even spin-s f + and parity-odd spin-(s − 1) f − fields have the same form: where a k is determined by (3.2) for spin s and spin (s − 1) respectively and
Integer superspin S
The massive superspin-s supermultiplet contains First of all, we note that the four superblocks give the following relations on the mass parameters Their solution is The requirement that the superalgebra be closed again leads to If the relations (5.5) are satisfied then commutators of supertransformations on parityeven and parity-odd bosonic spin-s fields have the same form: where a k is determined by (3.2) for spin s and
Summary
In this paper we have developed the component Lagrangian description of massive on-shell N = 1 supermultiplets with arbitrary (half)integer superspin in four dimensional Anti de Sitter space (AdS 4 ). The derivation is based on supersymmetric generalization of frame-like gauge invariant formulation of massive higher spin fields where massive supermultiplets are described by an appropriate set of massless ones. We show that N = 1 massive supermultiplets can be constructed as a combination of four massive superblocks, each containing one massive boson and one massive fermion. As a result, we have derived both the supertransformations for the components of the one-shell supermultiplets and the corresponding invariant Lagrangians. Thus, the component Lagrangian formulation of the N = 1 supersymmetric free massive higher spin field theory on the AdS 4 space can be considered complete. Let us briefly discuss the possible further generalizations of the results obtained. As we already pointed out, the problem of off-shell supersymmetric massive higher spin theory remains open in general. Only a few examples of such a theory with concrete superspins [34], [35], [36] in flat space have been developed. There are no known examples in the AdS space. There are two possible approaches to study this general problem. One option is to start with on-shell theory and try to find the necessary auxiliary fields closing the superalgebra on the base of Noether's procedure. Another approach can be based on the use of the superfield techniques from the very beginning. At present, realization of both these approaches seems unclear and will require the development of new methods. Besides, the interesting generalizations of the results obtained can be constructing the partially massless N = 1 supermultiplets and finding at least on-shell component Lagrangian description for Nextended massive supermultiplets in flat and AdS spaces. We hope to attack these problems in the forthcoming works. | 7,641.2 | 2019-01-28T00:00:00.000 | [
"Physics"
] |
Optime L2-Control Problem In Coefficients For A Linea Elliptic Equation. II. Approximation Of Solutions And Optimality Conditions
In this paper we study we study a Dirichlet optimal control prob- lem associated with a linear elliptic equation the coefficients of which we take as controls in the class of integrable functions. The characteristic feature of this control object is the fact that the skew-symmetric part of matrix-valued control A(x) belongs to L2-space (rather than Linfinty). In spite of the fact that the equations of this type can exhibit non-uniqueness of weak solutions, the corresponding OCP, under rather general assumptions on the class of admissi- ble controls, is well-posed and admits a nonempty set of solutions [9]. However, the optimal solutions to such problem may have a singular character. We show that some of optimal solutions can be attainable by solutions of special optimal control problems in perforated domains with fictitious boundary controls on the holes.
In this paper we deal with the following optimal control problem (OCP) in coefficients for a linear elliptic equation (1) where (A sym , A skew ) ∈ L ∞ (Ω; R N ×N ) × L 2 (Ω; R N ×N ) are respectively the symmetric and antisymmetric part of the control A, y d ∈ L 2 (Ω) and f ∈ H −1 (Ω) are given distributions, and A Ad denotes the class of admissible controls which will be precised later.
The characteristic feature of this problem is the fact that the skew-symmetric part of matrix A(x) belongs to L 2 -space (rather than L ∞ ).As a result, the existence and uniqueness of the weak solutions to the corresponding boundary value problem (1) are usually drastically different from the properties of solutions to the elliptic equations with L ∞ -matrices in coefficients.In most of the cases, the situation can deeply change for the matrices A with unremovable singularity.As a rule, some of the weak solutions can be attained by the weak solutions to the similar boundary value problems with L ∞ -approximated matrix A. However, this type does not exhaust all weak solutions to the above problem.There is another type of weak solutions called non-variational [20,22], singular [3,13,14,19], pathological [16,17] and others.As for the optimal control problem (1) we have the following result [9] (see [8] for comparison): for any approximation {A * k } k∈N of the matrix A * ∈ L 2 Ω; S N skew with properties {A * k } k∈N ⊂ L ∞ (Ω; S N skew ) and A * k → A * strongly in L 2 (Ω; S N skew ), optimal solutions to the corresponding regularized OCPs associated with matrices A * k always lead in the limit as k → ∞ to some admissible (but not optimal in general) solution ( A, y ) of the original OCP (1).Moreover, this limit pair can depend on the choice of the approximative sequence {A * k } k∈N .However, as follows from counter-example, given in [9], it is possible a situation when none of optimal solutions to OCP (1) can be attainable in such way.Therefore, the aim of this paper is to discuss a scheme of approximation for OCP (1) in order to attain the other types of optimal solutions, and derive the first order optimality system to this problem.
In order to illustrate the difficulties on the approximations of the OCPs due to the possible existence of variational and non-variational solutions, we present some numerical simulations in section 5.
In section 3 we give a precise description of the class of admissible controls A ad ⊂ L 2 Ω; R N ×N which guarantee that non-variational solutions can be attained through the sequence of optimal solutions to OCPs in special perforated domains with fictitious boundary controls on the boundary of holes.Namely, we consider the following family of regularized OCPs (2) subject to the constraints − div A sym ∇y + A skew ∇y = f in Ω ε , y = 0 on ∂Ω, ∂y/∂ν A = v on Γ ε , y ∈ H 1 0 (Ω ε ; ∂Ω), where Ω ε is the subset of Ω such that ∂Ω ⊂ ∂Ω ε , σ > 0, and A(x) S N := max i,j=1,...,N |a ij (x)| ≤ ε −1 a.e. in Ω ε .Here, v stands for the fictitious control.
We show that OCP (2) has a nonempty set of solutions (A 0 ε , v 0 ε , y 0 ε ) for every ε > 0.Moreover, as follows from (2) 1 , the cost functional I ε seems to be rather sensitive with respect to the fictitious controls.Due to this fact, we prove that the sequence (A 0 ε , y 0 ε ) ε>0 gives in the limit an optimal solution (A 0 , y 0 ) to the original problem.
The main technical difficulty, which is related with the study of the asymptotic behaviour of OCPs (2) as ε → 0, deals with the identification of the limit of two weakly convergent sequences.Due to the special properties of the skew-symmetric parts of admissible controls A ∈ A ad ⊂ L 2 Ω; S N , we show that this limit can be recovered in an explicit form.We also show in this section that the energy equalities to the regularized boundary value problems can be specified by two extra terms which characterize the presence of the-called hidden singular energy coming from L 2 -properties of skew-symmetric components A skew of admissible controls.
In conclusion, in Section 4, we derive the optimality conditions for regularized OCPs (2) and show that the limit passage in optimality system for the regularized problems (2) as ε → 0 leads to the optimality system for the original OCP (1).
Let M N = S N sym ⊕ S N skew be the set of all N × N real matrices.Here, S N skew stands for the set of all skew-symmetric matrices C = [c ij ] N i,j=1 , whereas S N sym is the set of all N × N symmetric matrices.
Let L 2 (Ω) = L 2 Ω; S N skew be the normed space of measurable squareintegrable functions whose values are skew-symmetric matrices.By analogy, we can define the space L 2 (Ω) = L 2 Ω; S N sym .Let A(x) and B(x) be given matrices such that A, B ∈ L 2 (Ω; S N skew ).We say that these matrices are related by the binary relation on the set L 2 (Ω; S N skew ) (in symbols, A(x) B(x) a.e. in Ω), if Here, L N (E) denotes the N -dimensional Lebesgue measure of E ⊂ R N defined on the completed borelian σ-algebra.
We define the divergence div A of a matrix A ∈ L 2 Ω; M N as a vector-valued distribution d ∈ H −1 (Ω; R N ) by the following rule where a i stands for the i-th row of the matrix A.
For fixed two constants α and β such that 0 < α ≤ β < +∞, we define M N be an arbitrary matrix.In view of the representation A = A sym + A skew , we can associate with A the form ϕ(•, By analogy with [9], we introduce the following concept. Definition 1.1.We say that an element y ∈ H 1 0 (Ω) belongs to the set D(A) if ( 6) with some constant c depending only of y and A skew .
As a result, having set we see that the bilinear form [y, ϕ] A can be defined for all ϕ ∈ H 1 0 (Ω) using ( 6) and the standard rule (7) [ where {ϕ ε } ε>0 ⊂ C ∞ 0 (Ω) and ϕ ε → ϕ strongly in H 1 0 (Ω).Let ε be a small parameter, I ε : U ε × Y ε → R be a cost functional, Y ε be a space of states, and U ε be a space of controls.Let be a set of all admissible pairs linked by some state equation.We consider the following constrained minimization problem: Since the sequence of constrained minimization problems (8) lives in variable spaces U ε × Y ε , we assume that there exists a Banach space U × Y with respect to which a convergence in the scale of spaces {U ε × Y ε } ε>0 is defined (for the details, we refer to [12,21]).In the sequel, we use the following notation for this convergence (u ε , y ε ) In order to study the asymptotic behavior of a family of (CMP ε ), the passage to the limit in (8) as the small parameter ε tends to zero has to be realized.Following the scheme of the direct variational convergence [12], we adopt the following definition for the convergence of minimization problems in variable spaces.Definition 1.2.A problem inf (u,y)∈Ξ I(u, y) is the variational limit of the sequence (8) as ε → 0 in symbols, inf if and only if the following conditions are satisfied: (dd) For every (u, y) ∈ Ξ ⊂ U × Y there are a constant ε 0 > 0 and a sequence {(u ε , y ε )} ε>0 (called a Γ-realizing sequence) such that Theorem 1. 3 ([12]).Assume that the constrained minimization problem (12) inf is the variational limit of sequence (8) in the sense of Definition 1.2 and this problem has a nonempty set of solutions For every ε > 0, let (u 0 ε , y 0 ε ) ∈ Ξ ε be a minimizer of I ε on the corresponding set Ξ ε .If the sequence {(u 0 ε , y 0 ε )} ε>0 is relatively compact with respect to the µ-convergence in variable spaces U ε × Y ε , then there exists a pair (u 0 , y 0 ) ∈ Ξ opt 0 such that
Setting of the Optimal Control Problem
Let f ∈ H −1 (Ω) and y d ∈ L 2 (Ω) be given distributions.by choosing an appropriate control A ∈ L 2 (Ω; M N ).More precisely, we are concerned with the following OCP subject to the constraints A ∈ A ad .(18) To define the class of admissible controls A ad , , we introduce the following sets. where We say that a matrix A = A sym + A skew is an admissible control to the Dirichlet boundary value problem ( 16) We have the following result.
Proposition 1 ( [9]).The set A ad is nonempty, convex, and sequentially compact with respect to the strong topology of L 2 (Ω; M N ).
The distinguishing feature of optimal control problem (15)-( 18) is the fact that the matrix-valued control A ∈ A ad is merely measurable and belongs to the space L 2 Ω; M N (rather than the space of bounded matrices L ∞ Ω; M N ).The unboundedness of the skew-symmetric part of matrix A ∈ A ad can have a reflection in non-uniqueness of weak solutions to the corresponding boundary value problem.It means that there exists a matrix A ∈ L 2 Ω; M N such that the corresponding state y ∈ H 1 0 (Ω) may be not unique.
Definition 2.2.We say that (A, y) is an admissible pair to the OCP (15)- (18) if , and the pair (A, y) is related by the integral identity ( 23) We denote by Ξ the set of all admissible pairs for the OCP (15)- (18).Let τ be the topology on the set of admissible pairs Ξ ⊂ L 2 Ω; M N × H 1 0 (Ω) which we define as the product of the strong topology of L 2 Ω; M N and the weak topology of H 1 0 (Ω).We say that a pair (A 0 , y 0 ) ∈ L 2 Ω; M N × D(A 0 ) is optimal for problem (15)-( 18) if (A 0 , y 0 ) ∈ Ξ and I(A 0 , y 0 ) = inf (A,y)∈ Ξ
I(A, y).
As immediately follows from (7), every weak solution y ∈ D(A) to the problem ( 16)- (17) satisfies the energy equality (24) where the value [y, y] A may not of constant sign for all y ∈ D(A).Hence, the energy equality (24) does not allow us to derive a reasonable a priory estimate in H 1 0 -norm for the weak solutions (see [9]).
As was shown in [9], OCP (15)-( 18) is always regular, i.e.Ξ = ∅, and moreover, for each f ∈ H −1 (Ω) and y d ∈ L 2 (Ω), this problem admits at least one solution.However, the main point is that for any approximation {A * k } k∈N of the matrix skew ), optimal solutions to the corresponding regularized OCPs associated with matrices A * k always lead in the τ -limit as k → ∞ to some admissible (but not optimal in general) solution ( A, y ) of the original OCP (15)- (18).Moreover, this limit pair can depend on the choice of the approximative sequence {A * k } k∈N .However, as follows from counter-example, given in [9], it is possible a situation when none of optimal solutions to OCP (15)-( 18) can be attainable in such way.In particular, the main result of [9] says that if some optimal pair ( A, y ) ∈ L 2 (Ω; M N )× H 1 0 (Ω) to OCP (15)-( 18) is attainable through the above L ∞ -approximation of matrix A * , then this pair is related by energy equality (25) Hence, the question is what kind of approximation to OCP (15)-( 18) should be applied in order to attain the other types of optimal solutions which do not hold true the energy equality (25).
3. On approximation of non-variational solutions to OCP (15)-( 18) We begin this section with some auxiliary results and notions.Let A ∈ A ad be a fixed matrix and let L(A) be a subspace of H 1 0 (Ω) such that (26) i.e., L(A) is the set of all weak solutions of the homogeneous problem (27) − div A∇y = 0 in Ω, y = 0 on ∂Ω.
Let ε be a small parameter.Assume that the parameter ε varies within a strictly decreasing sequence of positive real numbers which converge to 0. Hereinafter in this section, for any subset E ⊂ Ω, we denote by |E| its N -dimensional Lebesgue measure L N (E).
For every ε > 0, let T ε : R → R be the truncation function defined by The following property of T ε is well known (see [10]).Let g ∈ L 2 (Ω) be an arbitrary function.Then we have: Let A * ∈ L 2 Ω; S N skew be a matrix mentioned in the control constraints (21).For a given sequence {ε > 0}, we define the cut-off operators T ε : for every ε > 0. We associate with such operators the following set of subdomains {Ω ε } ε>0 of Ω ( 30) Definition 3.1.We say that a matrix A * ∈ L 2 Ω; S N skew is of the F-type, if there exists a strictly decreasing sequence of positive real numbers {ε} converging to 0 such that the corresponding collection of sets {Ω ε } ε>0 , defined by (30), possesses the following properties: (i) Ω ε are open connected subsets of Ω with Lipschitz boundaries for which there exists a positive value δ > 0 such that where Γ ε = ∂Ω ε \ ∂Ω.(ii) The surface measure of the boundaries of holes Q ε = Ω \ Ω ε is small enough in the following sense: (32) (iii) For each matrix A ∈ L 2 (Ω; M N ) such that A skew A * a.e. in Ω, and for each element h ∈ D(A), there is a constant c = c(h) depending on h and independent of ε such that Thus, if A * is of the F-type, each of the sets Ω ε is locally located on one side of its Lipschitz boundary ∂Ω ε .Moreover, in this case the boundary ∂Ω ε can be divided into two parts Remark 1.As immediately follows from Definition 3.1, the sequence of perforated domains {Ω ε } ε>0 is monotonically expanding, i.e., Ω ε k ⊂ Ω ε k+1 for all ε k > ε k+1 , and perimeters of Q ε tend to zero as ε → 0.Moreover, because of the structure of subdomains Q ε (see (31)) and L 2 -property of the matrix A * , we have This entails the property: Remark 2. As follows from [4], F-property of the skew-symmetric matrix A * implies the so-called strong connectedness of the sets {Ω ε } ε>0 which means the existence of extension operators Remark 3. It is easy to see that in view of the conditions (1)-(ii) of Definition 3.1 and the Sobolev Trace Theorem [1], for all ε > 0 small enough, the inequality holds true with a constant C = C(Ω) independent of ε.
As a direct consequence of Definition 3.1, we have the following obvious result.
Proposition 2. Assume that A * ∈ L 2 Ω; S N skew is of the F-type.Let {Ω ε } ε>0 be a sequence of perforated domains of Ω given by (31), and let {χ Ωε } ε>0 be the corresponding sequence of characteristic functions.Then Definition 3.2.We say that a sequence (∇ (P ε y ε ) , ∇ϕ) R N dx and, hence, the weak limit in the sense of Definition 3.2 does not depend on the choice of extension operators P ε : Let us consider the following sequence of regularized OCPs associated with perforated domains Ω ε (38) inf where ) is considered as a fictitious control, and σ is a positive number such that Using the fact that A ∈ L ∞ (Ω ε ; M N ) for every ε > 0 and each A ∈ A ε ad , we arrive at the following obvious result.
In order to study the asymptotic behavior of the sequences of admissible solu- in the scale of variable spaces, we adopt the following concept.
Definition 3.4.We say that a sequence and sup We are now in a position to state the main result of this section.Theorem 3.5.Assume that the matrix A * ∈ L 2 Ω; S N skew is of the F-type.Let {Ω ε } ε>0 be a sequence of perforated subdomains of Ω associated with matrix A * .Let f ∈ H −1 (Ω) and y d ∈ L 2 (Ω) be given distributions.Then the original optimal control problem inf (A,y)∈Ξ I(A, y) , where the sequence as the parameter ε tends to zero.
Proof. Since each of the optimization problems inf
we have to show that in this case all conditions of Definition 1.2 hold true.To do so, we divide this proof into two steps.
Step 1.We show on this step that condition (dd) of Definition 1.2 holds true.Let (A, y) ∈ Ξ be an arbitrary admissible pair to the original OCP (15)- (18).We will indicate two cases.Case 1.The set L(A), defined in (26), is a singleton.It means that h ≡ 0 is a unique solution of homogeneous problem (27); Case 2. The set L(A) is not a singleton.So, we suppose that the set L(A) is a linear subspace of H 1 0 (Ω) and it contains at least one non-trivial element of D(A) ⊂ H 1 0 (Ω).We start with the Case 2. Let h ∈ D(A) be a element of the set L(A) such that h is a non-trivial solution of homogeneous problem (27).In the sequel, the choice of element h ∈ L(A) will be specified (see ( 65)).Then we construct a (Γ, 0)-realizing sequence is a sequence of admissible controls to the problems (38).Note that in this case the properties (43)-( 46) are obviously true for the sequence where distributions w ε are such that (50) sup ε>0 1 (jjj) y ε ∈ H 1 0 (Ω ε ; ∂Ω) ε>0 is the sequence of weak solutions to the corresponding boundary value problems Hence, due to the Lax-Milgram lemma and the superposition principle, the sequence y ε ∈ H 1 0 (Ω ε ; ∂Ω) ε>0 is defined in a unique way and for every ε > 0 we have the following decomposition y ε = y ε,1 + y ε,2 , where y ε,1 and y ε,2 are elements of H 1 0 (Ω ε ) such that (hereinafter, we suppose that the functions y ε of H 1 0 (Ω ε , ∂Ω) are extended by operators Then (53)-(54) lead us to the energy equalities By the initial assumptions, we have h ∈ L(A).Then the condition (iii) of Definition 3.1 implies that (for the details we refer to [11]) with some constant C(h) independent of ε.Hence, (57) sup ε>0 Thus, using the continuity of the embedding H 1 2 (Γ ε ) → L 2 (Γ ε ) and Sobolev Trace Theorem, we get As a result, we arrive at the following the a priori estimates Hence, the sequences y ε,1 ∈ H 1 0 (Ω ε ; ∂Ω) ε>0 and y ε,2 ∈ H 1 0 (Ω ε ; ∂Ω) ε>0 are weakly compact with respect to the weak convergence in variable spaces [21], i.e., we may assume that there exists a couple of functions y 1 and y 2 in Now we can pass to the limit in the integral identities (53)-(54) as ε → 0. Using (50), (62), (57), L 2 -property of A ∈ A ad , and the fact that χ Ωε f ε → f strongly in H −1 (Ω), we finally obtain for every ϕ ∈ C ∞ 0 (Ω).Hence, y 1 and y 2 are weak solutions to the boundary value problem ( 16)-( 17) and (27), respectively.Hence, y 2 ∈ L(A) and y 1 ∈ D(A) (see [9]).As a result, we arrive at the conclusion: the pair (A, y 1 + h) belongs to the set Ξ, for every h ∈ L(A).Since by the initial assumptions (A, y) ∈ Ξ, it follows that having set in (49) y in H 1 0 (Ω ε ; ∂Ω) as ε → 0. Therefore, in view of (66), (57), (50), we see that Thus, the property (10) holds true.It is worth to notice that in the Case 1, we can give the same conclusion, because we originally have h ≡ 0. Hence, the solutions to boundary value problems (63)-( 63) are unique and, therefore, we can claim that y = y 1 , y 2 = 0, and h = 0.
In view of this, we make use the following relations by ( 50) by ( 57) by ( 37) and (66) In order to obtain the convergence we apply the energy equality which comes from the condition (A, y) ∈ Ξ ( 70) , and make use of the following trick.It is easy to see that the integral identity for the weak solutions y ε to boundary value problems (40) can be represented in the so-called extended form where h * is an arbitrary element of L. Indeed, because of the equality we have an equivalent identity to the classical definition of the weak solutions of boundary value problem (40).
As follows from (57), (66), and the Sobolev Trace Theorem, the numerical sequences are bounded.Therefore, we can assume, passing to a subsequence if necessary, that there exists a value Since y ε y weakly in H 1 0 (Ω ε ; ∂Ω) and y ∈ D(A), it follows that there exists a sequence of smooth functions {ψ ε ∈ C ∞ 0 (Ω)} ε>0 such that ψ ε → y strongly in H 1 0 (Ω).Therefore, following the extension rule (7), we have Because of the initial assumptions, we can assume that the element So, due to this observation, we specify the choice of element h * ∈ L(A) as follows or, in other words, we aim to ensure the condition ξ 1 − ξ 2 − ξ 3 + [y, y] A = 0.As a result, we have: Having put ϕ = y ε and h * = h * in (71) and using the fact that Ω ∇y ε , A skew ∇y ε R N χ Ωε dx = 0, we arrive at the following energy equality for the boundary value problem ( 40) As a result, taking into account the properties (37), (66), (75), we can pass to the limit as ε → 0 in (76).This yields Hence, turning back to (67), we see that this relation is a direct consequence of (68) and (77).Thus, the sequence {(u ε , v ε , y ε ) ∈ Ξ ε } ε>0 , which is defined by ( 49) and (65), is Γ-realizing.The property (dd) is established.
Step 2. We prove the property (d) of Definition 1.
and the sequence of fictitious controls In view of Definition 3.
It is easy to see that the limit matrix A is an admissible control to OCP (15)-( 18), i.e.A ∈ A ad .Since the integral identity holds true for every k ∈ N, we can pass to the limit in (80) as k → ∞ using Definition 3.4 and the estimate v k , ϕ coming from inequality (48).Then proceeding as on the Step 1, it can easily be shown that the limit pair (A, y) is admissible to OCP (15)- (18).Hence, the condition (79) 1 is valid.As for the inequality (79) 2 , we see that by (37) and compactness of the embedding H 1 0 (Ω) → L 2 (Ω).In view of the properties (78) and ( 5 we can conclude that the sequence ( in L 2 (Ω; S N sym ).Hence, combining this fact with (78) 5 and (37), we finally obtain As a result, the lower semicontinuity of L 2 -norm with respect to the weak convergence, immediately leads us to the inequality lim inf Thus, in order to prove the inequality (79) 2 , it remains to combine relations (81), (82), and take into account the following estimate The proof is complete.
In conclusion of this section, we consider the variational properties of OCPs (38)-(40).To this end, we apply Theorem 1.3.
be a sequence of optimal solutions to regularized problems (38)-( 40), where χ Ωε f ε → f strongly in H −1 (Ω).Then there exists an optimal pair (A 0 , y 0 ) ∈ A ad to the original OCP (15)-( 18), which is attainable in the following sense
y). (85)
Proof.In order to show that this result is a direct consequence of Theorem 1.3, it is enough to establish the compactness property for the sequence of optimal solutions (A 0 ε , v 0 ε , y 0 ε ) ∈ Ξ ε ε>0 in the sense of Definition 3.4.Let h ∈ C ∞ 0 (Ω) be a non-zero function such that div (A sym ∇h + A * ∇h) ∈ L 2 (Ω), where we assume that In view of the initial assumptions and estimate (see [11] for the details) sup ε>0 1 ∂Ω) be a corresponding solution to boundary value problem (40).Then following (60), we come to the estimate where the constant C is also independent of ε.As a result, we get Since ε −σ H N −1 (Γ ε ) → 0 as ε → 0, it follows that the minimal values of the cost functional (39) bounded above uniformly with respect to ε.Thus, the sequence of optimal solutions (A 0 ε , v 0 ε , y 0 ε ) ε>0 to the problems (38 ) and, hence, in view of Proposition 1 , it is relatively compact with respect to the weak convergence in the sense of Definition 3.4.For the rest of proof, it remains to apply Theorem 1.3.
Remark 5. We note that variational properties of optimal solutions, given by Theorem 3.6, do not suffice to assert that the convergence of optimal states P ε (y 0 ε ) to y 0 is strong in H 1 0 (Ω).Indeed, the convergence (86) which comes from ( 84)-(85), does not imply the norm convergence in H 1 0 (Ω).At the same time, combining relation (86) with energy identities and Ω ∇y 0 , A 0 sym ∇y 0 rewritten for optimal solutions of the problems (51)-( 52) and ( 16)-( 17), respectively, we get (87) lim = −[y 0 , y 0 ] A 0 .It gives us another example of the product of two weakly convergent sequences that can be recovered in the limit in an explicit form.Moreover, this limit does not coincide with the product of their weak limits.
Our next remark deals with a motivation to put forward another concept of the weak solutions to the approximated boundary value problem (40) which can be viewed as a refinement of the integral identity (53).Definition 3.7.Let {Ω ε } ε>0 be a sequence of perforated subdomains of Ω associated with matrix A by the rule (30)-(31).We say that a function
Since for every A ∈ A ad and h ∈ D(A) the bilinear form [h, ϕ]
A can be extended by continuity (see (7)) onto the entire space H 1 0 (Ω), it follows that the integral identity (88) can be rewritten as follows Hence, using the skew-symmetry property of the matrix A skew ∈ L 2 Ω; S N skew and the fact that the set L(A) is closed with respect to the strong topology of H 1 0 (Ω), we conclude: for every ε > 0 there exist an element h ε in L(A) such that the relation (89) can be reduced to the following energy equality Thus, in contrast to the "typical" energy equality to the boundary value problem (40), relation (90) includes some extra term which coming from the singular energy of the boundary value problem ( 16)-( 17) that was originally hidden in approximated problem (40).However, in contrast to the similar functional effect for Hardy inequalities in bounded domains (see [18]), the term Ω ∇y ε , A sym ∇h ε R N dx + [h ε , y ε ] A is additive to the total energy, and, hence, its influence may correspond to the increasing or decreasing of the total energy and may even constitute the main part of it.
Optimality System for Regularized OCPs Associated with
Perforated Domains Ω ε and its Asymptotic Analysis As follows from Theorem 3.3, for each ε > 0 small enough, the optimal control problem inf (A,v,y)∈Ξε I ε (A, v, y) , where the cost functional 39)-( 40), is a well-posed controllable system.Hence, to deduce an optimality system for this problem, we make use of the following well-know result.
Theorem 4.1 (Ioffe and Tikhomirov [6,5]).Let Y , U , and V be Banach spaces, let J : Y × U → R be a cost functional, let F : Y × U → V be a mapping, and let U ∂ be a convex subset of the space U containing more than one point.Let ( u, y) ∈ U × Y be a solution to the problem For each u ∈ U ∂ , let the mapping y → J(u, y) and y → F (u, y) be continuously differentiable for y ∈ O( y), where O( y) is some neighbourhood of the point y, and let Im F y ( u, y) be closed and it has a finite codimension in V .In addition, for y ∈ O( y), let the function u → J(u, y) be convex, the functional J is Gâteausdifferentiable with respect to u at the point ( u, y), and the mapping u → F (u, y) is continuous from U to Y and affine, i.e., Then there exists a pair (λ, p) where the Lagrange functional L is defined by equality If Im F y ( u, y) = V , then it can be assumed that λ = 1 in (91)-(92).
For our further analysis, we set ) be the trace operator, i.e. γ 0 Γε is the extension by continuity of the restriction operator γ 0 Γε (u) = u Γε given for all u ∈ C ∞ 0 (R N ).We are now in a position to prove the following result.Theorem 4.2.For a given ε > 0, let ∂Ω) be an optimal solution to the regularized problems (38)-(40).Assume that the following condition holds true Then there exists an element p ε ∈ H 1 0 (Ω ε ; ∂Ω) such that the tuple satisfies the following system of relations where Remark 6.It is worth to notice that, in contrast to (103), relation (105) should be interpreted as an equality of L 2 -functions.It means that the description of boundary value problem ( 105)-( 106) in the sense of distributions takes other form, namely, where the component ∂y 0 ε /∂ν (A 0 ε ) skew is unknown a priori.Here, we have used the fact that (110) Proof.By Theorem 4.1, there exists a pair p = (p 1 , p 2 ) ) such that the Lagrange functional L satisfies relations (91)-(92).The direct computations show that, in view of (101), the condition (91) takes the form (here we have used the fact that Im F y ( u, y) = V ).As follows from (111) and (102), for h ∈ C ∞ 0 (Ω ε ), we have Due to equality (110) and the initial assumptions (102), relation (112 Thanks to Lipschitz properties of ∂Ω ε , we can conclude that (see, for instance, [15,4]) is valid.Then, combining this relation with ( 111)-( 112), we arrive at the following identity = 0, which is valid for all h ∈ H 2 (Ω ε ) ∩ H 1 0 (Ω ε ; ∂Ω) and all p = (p 1 , p 2 ) such that (115) As follows from (114), for each Taking into account the fact that the mapping is an epimorphism (see Theorem 1.1.4in [5]), from (117) it follows that (118) Thus, in view of ( 116) and (118), relation (114) takes the form Applying the same arguments as before, we finally conclude that (119) As a result, having gathered relations (112), (116), and (119), we arrive at the boundary value problem (105)-(106).Moreover, by the regularity of solutions to the problem (105)-(106), we have . In order to end of the proof of this theorem, it remains to show the validity of the relations (107)-( 108).With that in mind, we note that, in view of the structure (94)-(96), condition (92) takes the form (120) Here, we have used the fact that H 1 2 (Γ ε ) can be reduced to a Hilbert space with respect to an appropriate equivalent norm, and, hence, H − 1 2 (Γ ε ) is a dual Hilbert space as well (for the details we refer to Lions and Magenes [15, p.35]).
Remark 7. In view of the assumption (102), we make use of the following observation.Let {(A ε , v ε , y ε ) ∈ Ξ ε } ε>0 be a weakly convergent sequence in the sense of Definition 3.4.Since in this case y ε ∈ H 1 0 (Ω ε ; ∂Ω) ε>0 are the solutions to the boundary value problem (99)-(100) with A = A ε , and g = f ε ∈ L 2 (Ω), and w = v ε ∈ H − 1 2 (Γ ε ), it follows that the sequence div A ε ∇y ε χ Ωε ε>0 is obviously bounded in L 2 (Ω).However, because of the non-symmetry of L 2 -matrices {A ε } ε>0 , it does not imply the same property for the sequence div A skew ε ∇y ε χ Ωε ε>0 .In order to guarantee this property, we make use of the notion of divergence div A of a skew-symmetric matrix A ∈ L 2 Ω; S N skew .We define it as a vector-valued distribution d ∈ H −1 (Ω; R N ) following the rule where a i stands for the i-th column of the matrix A. As a result, we can give the following conclusion: if div A skew ε ∈ L ∞ (Ω; R N ) for all ε > 0 and the sequence div A skew ε ε>0 is uniformly bounded in L ∞ (Ω; R N ), then there exists a constant C > 0 independent of ε such that (123) sup Indeed, since ) for all ε > 0), it follows that this relation can be extended by continuity to the following one Hence, To deduce the estimate (123), it remains to refer to the boundedness of y ε in variable H 1 (Ω ε ; ∂Ω) (see Definition 3.4).
Our next intention is to provide an asymptotic analysis of the optimality system (103)-(108) as ε tends to zero.With that in mind, we assume the fulfilment of the following Hypotheses: (H1) For each admissible control A ∈ A ad the corresponding bilinear form [y, ϕ] A is continuous in the following sense: , and y, y ε ∈ D(A) for ε > 0 small enough.(H2) Let (A 0 ε , v 0 ε , y 0 ε , p ε ) e>0 be a sequence of tuples such that, for each ε > 0 the corresponding cortege (A 0 ε , v 0 ε , y 0 ε , p ε ) satisfies the optimality system (103)-(108).Then there exists a sequence of extension operators and element ψ ∈ H 1 0 (Ω) such that P ε (p ε ) → ψ strongly in H 1 0 (Ω) and ψ ∈ D(A * ).
Step 2. On this step we study the limit passage in inequality (108) as ε → 0. To this end, we rewrite it as follows (132) By Theorem 3.6 (see (85)), we have by the compactness of the embedding H 1 0 (Ω) → L 2 (Ω), and lim ε→0 ε Step 3. As for the term J ε 3 (A), we see that requires rather strong assumptions in the form of Hypotheses (H1)-H2).At the same time, the verification of these Hypotheses becomes trivial provided . This proves Hypothesis (H2).
Numerical simulations
The main issue of this section is to present numerical simulations that tend to ascertain our approaches developed above.We restrict ourselves to the case when Ω is the unit ball of R 2 or R 3 .
The numerical simulations have been conducted according three guidelines.For this we consider some matrix A d ∈ L 2 (Ω) N ×N and y d in H 1 0 (Ω), and set f = f d := − div (A d ∇y d ).
We focus on the following test case: with the uniform ellipticity condition on A sym given by (5).For this problem under view the algorithm used should allow to recover the pair (A d , y d ), because the minimum of (147) is clearly 0.
Once validated, we return to the original OCP (1), for which we consider singular y d and A d in two manners: we still consider A d , y d and f d with A d possibly singular at some point ξ of the unit ball Ω in R 2 or R 3 .We triangulate Ω by a triangulation τ such that no vertices of τ is ξ and such that no edges of τ contains ξ.
We proceed to the classical gradient algorithm.
In this case, we expect, but cannot prove, that the algorithm converges to a variational solution.Indeed, when projecting on the grid, due to our assumption, we cannot distinguish between singular and non singular data.Moreover, for each projected matrix A in the admissible set, the projected matrix gives rise to a unique solution, thus the projected problem changes in its behavior.And of course as already said, due to the non-singular situation, we are led to think that the sequence of approximate solutions constructed will give rise to a variational solution.
In the final simulation procedure, we have punctured our domain and discretized the OCP given in (2).Accordingly, there is now no singularity in the punctured domain.We, afterwards, consider refining the punctured domain by reducing the size of the hole.
In the following sections we describe more precisely each scheme and present some numerical results with some interpretations in each case that, we do think, clarifies the situation.5.1.Validation.Throughout this section and the following ones, we will take A d of the following form: In the 2d-case (149) For the case of the unpunctured domain, the gradient G test := ∇ A J test is obtained by using the adjoint state p (see, for instance, (120) and further).
Let p be the solution of where W ∈ L 2 (Ω) N ×N .We adopt a finite element method for y and p such that A is constant for each triangular element of the mesh.In order for the algorithm to be more efficient, we use more data than these discrete components of A. We set n different pairs (u i d , f i d ), i = 1, n .To reduce the value of n, we choose to use a spatial smoothing for each component of G test .In order to do so, several options are possible ( [23], [24]).The new cost functional modified according to these n tests is now (with y i solution of the state equation (148) for f equals f i with i = 1, n): (153) J(A, {y i , i = 1, n}; {y i d , i = 1, n}) = 1 n i=1,n J i (A, y i ; y i d ), where The gradient becomes hereafter a mean of terms obtained in (152).
For the two-dimensional case, we use 16 pairs (y i d , f i d ) associated to a combination of sinusoidal functions useful to capture sufficient information.Each state y i d verify the state problem with f equal to f i d and A equal to the reference A d (Figure 1).The coefficient ε 0 is equal to 10 6 .The initial matrix A is by its coefficients (A 11 (x), A 12 (x); A 21 (x), A 22 (x)) = (1, 0.2; 0.1, 1.1).
The results (Figure 2) show a coherent convergence.
For the three-dimensional case, the simulation durations prevent to use the same level of discretization than for the two-dimensional cases.We use 48 pairs (y i d , f i d ) associated to a combination of sinusoidal functions equivalent to the 2D-cases.We use 11929 points and 72946 cells for the mesh (without hole).So we work on 72946 variables for each component of A.
The number of pairs (y i d , f i d ) and the smoothing are useful and allow us to control all theses variables, but with difficulties.We must parallelize our control problem.The n pairs (y i d , f i d ) create n different state problems, each of them can be computed on different core.We use this characteristic to reduce to a few days the simulation duration.We test our three-dimensional program with a singular asymmetric A d .The results are shown on Figures 3 and 4. The results are as consistent as for the 2D-problem.5.2.Discretization in the unpunctured domain.We return to the original OCP (1).We use the same pairs (y i d , f i d ), i = 1, n but now the real A d should be considered as unknown that is to say that we now consider a real optimization problem, while the preceding test cases, could be considered as an identification or inverse problem.(without the trick with puncturing of the singularity region).We use these results to compare with the next results associated to the OCP (2).5.3.Discretization in the punctured domain.At this step we consider the approximation of the original OCP in the form of (2).In this case, we must add the p to the adjoint p state solution of (151) for each pairs (u i d , f i d ) where Ω is replaced by Ω ε and (154) p = − q ε σ on Γ ε , where q satisfies (denoting B ε :) q − ∆q = 0 in B ε , (here Ω = Ω ε B ε ) ∂q ∂ν = v on Γ ε .
We have then (156) v = q H 1 (Bε) For the two-dimensional case, the pictures 7, 8 show the results.The second case uses a smaller hole.For the three-dimensional case, the figure 9 shows the results.We can note that the values of the functional is always smaller than the cases without hole.For the second 2D case with a smaller hole, the components become more different than these obtained with the OCP (1).
Of course these results do not validate the existence of variational and nonvariational solutions.However, according to Zhkov [private communication], or if we believe that the uniqueness and regularity results in [2] lead to the absence of non-variational solutions in dimension 2, the numerical simulations above tends to show that arguably this does exist in dimension 2. However, due to computational performance and refinement requirement, it is probably very difficult to ascertain that our numerical simulations do prove the prevalence of non-variational solutions or not to OCP (1) on the class of admissible controls A with unremovable singularity.
L 2
(Ω; S N sym ).Moreover, taking into account the norm convergence property lim k→∞
Figure 1 .
Figure 1.2D case -All y i d (left) and A d with a singular asymmetric component (denoted t ... in the picture).
Figure 3 .
Figure 3. 3D case: Components of A d (left) and the final control A (right) for the plane (0,Y,Z), A 11 , A 22 , A 33 (line 1), symmetric part (line 2) and asymmetric part (line 3) of A 12 , A 13 , A 23 with singular asymmetric components.
Figure 5 .
Figure 5. 2D case without hole: the components of A (left), J and G (right).
∇p ε , A skew ∇y ε R N dx = ∈ L ∞ (Ω; S N skew ).Hence, Hypothesis (H1) is valid.As for Hypothesis (H2), we see that admissible controls A ∈ A ad with extra property (146) form a close set with respect to the strong convergence in L 2 (Ω; S N skew ).Moreover, in this case we have that the sequence χ Ωε div A 0 (Ω) (seeRemark 7).Hence, the sequence of adjoint states {p ε } ε>0 , given by (105)-(106), is bounded in H 2 (Ω ε ) by the regularity of solutions to the problem (105)-(106).Hence, within a subsequence, we can suppose that the sequence {P ε * | 10,980.2 | 2015-10-29T00:00:00.000 | [
"Mathematics"
] |
SyReNN: A Tool for Analyzing Deep Neural Networks
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains. Formally, DNNs are complicated vector-valued functions which come in a variety of sizes and applications. Unfortunately, modern DNNs have been shown to be vulnerable to a variety of attacks and buggy behavior. This has motivated recent work in formally analyzing the properties of such DNNs. This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation. The key insight is to decompose the DNN into linear functions. Our tool is designed for analyses using low-dimensional subsets of the input space, a unique design point in the space of DNN analysis tools. We describe the tool and the underlying theory, then evaluate its use and performance on three case studies: computing Integrated Gradients, visualizing a DNN’s decision boundaries, and patching a DNN.
Introduction
Deep Neural Networks (DNNs) [19] have become the state-of-the-art in a variety of applications including image recognition [54,34] and natural language processing [12].Moreover, they are increasingly used in safety-and security-critical applications such as autonomous vehicles [32] and medical diagnosis [10,39,29,38].These advances have been accelerated by improved hardware and algorithms.
DNNs (Section 2) are programs that compute a vector-valued function, i.e., from R n to R m .They are straight-line programs written as a concatenation of alternating linear and non-linear layers.The coefficients of the linear layers are learned from data via gradient descent during a training process.A number of different non-linear layers (called activation functions) are commonly used, including the rectified linear and maximum pooling functions.
Owing to the variety of application domains as well as deployment constraints, DNNs come in many different sizes.For instance, large image-recognition and natural-language processing models are trained and deployed using cloud resources [34,12], medium-size models could be trained in the cloud but deployed on hardware with limited resources [32], and finally small models could be trained and deployed directly on edge devices [48,9,23,35,36].There has also been a recent push to compress trained models to reduce their size [25].Such smaller models play an especially important role in privacy-critical applications, such as wake word detection for voice assistants, because they allow sensitive user data to stay on the user's own device instead of needing to be sent to a remote computer for processing.
Although DNNs are very popular, they are not perfect.One particularly concerning development is that modern DNNs have been shown to be extremely vulnerable to adversarial examples, inputs which are intentionally manipulated to appear unmodified to humans but become misclassified by the DNN [55,20,41,8].Similarly, fooling examples are inputs that look like random noise to humans, but are classified with high confidence by DNNs [42].Mistakes made by DNNs have led to loss of life [37,18] and wrongful arrests [27,28].For this reason, it is important to develop techniques for analyzing, understanding, and repairing DNNs.
This paper introduces SyReNN, a tool for understanding and analyzing DNNs.SyReNN implements state-of-the-art algorithms for computing precise symbolic representations of piecewise-linear DNNs (Section 3).Given an input subspace of a DNN, SyReNN computes a symbolic representation that decomposes the behavior of the DNN into finitely-many linear functions.SyReNN implements the one-dimensional analysis algorithm of Sotoudeh and Thakur [51] and extends it to the two-dimensional setting as described in Section 4.
Key insights.There are two key insights enabling this approach, first identified in Sotoudeh and Thakur [51].First, most popular DNN architectures today are piecewise-linear, meaning they can be precisely decomposed into finitelymany linear functions.This allows us to reduce their analysis to equivalent questions in linear algebra, one of the most well-understood fields of modern mathematics.Second, many applications only require analyzing the behavior of the DNN on a low-dimensional subset of the input space.Hence, whereas prior work has attempted to give up precision for efficiency in analyzing highdimensional input regions [49,50,17], our work has focused on algorithms that are both efficient and precise in analyzing lower-dimensional regions (Section 4).
Tool design.The SyReNN tool is designed to be easy to use and extend, as well as efficient (Section 5).The core of SyReNN is written as a highly-optimized, parallel C++ server using Intel TBB for parallelization [46] and Eigen for matrix operations [24].A user-friendly Python front-end interfaces with the PyTorch deep learning framework [45].
Use cases.We demonstrate the utility of SyReNN using three applications.The first computes Integrated Gradients (IG), a state-of-the-art measure used to determine which input dimensions (e.g., pixels for an image-recognition network) were most important in the final classification produced by the network (Section 6.1).The second precisely visualizes the decision boundaries of a DNN (Section 6.2).The last patches (repairs) a DNN to satisfy some desired specification involving infinitely-many points (Section 6.3).Thus, we believe that SyReNN is an interesting and useful tool in the toolbox for understanding and analyzing DNNs.Contributions.The contributions of this paper are: -A definition of symbolic representation of DNNs (Section 3).
-An efficient algorithm for computing symbolic representations for DNNs over low-dimensional input subspaces (Section 4).-A design of a usable and well-engineered tool implementing these ideas called SyReNN (Section 5).-Three applications of SyReNN (Section 6).
Preliminaries
We now formally define the notion of DNN we will use in this paper.
Our work is primarily concerned with the popular class of piecewise-linear DNNs, defined below.In this definition and the rest of this paper, we will use the term "polytope" to mean a convex and bounded polytope except where specified.Definition 2. A function f : R n → R m is piecewise-linear (PWL) if its input domain R n can be partitioned into finitely-many possibly-unbounded polytopes X 1 , X 2 , . . ., X k such that f Xi is linear for every X i .
The most common activation function used today is the ReLU function, a PWL activation function which is defined below.Definition 3. The rectified linear function (ReLU) is a function ReLU : R n → R m defined component-wise by where ReLU( v) i is the ith component of the vector ReLU( v) and v i is the ith component of the vector v.
In order to see that ReLU is PWL, we must show that its input domain R n can be partitioned such that, in each partition, ReLU is linear.In this case, we can use the orthants of R n as our partitioning: within each orthant, the signs of the components do not change hence ReLU is the linear function that just zeros out the negative components.
Although we focus on ReLU due to its popularity and expository power, SyReNN works with a number of other popular PWL layers include MaxPool, Leaky ReLU, Hard Tanh, Fully-Connected, and Convolutional layers, as defined in [19].PWL layers have become exceedingly common.In fact, nearly all of the state-of-the-art image recognition models bundled with Pytorch [44] are PWL.
The DNN's input-output behavior on the domain [−1, 2] is shown in Figure 1.
A Symbolic Representation of DNNs
We formalize the symbolic representation according to the following definition: Definition 4. Given a PWL function f : R n → R m and a bounded convex polytope X ⊆ R n , we define the symbolic representation of f on X, written f X , to be a finite set of polytopes f X = {P 1 , . . ., P n }, such that: 1.The set {P 1 , P 2 , . . ., P n } partitions X, except possibly for overlapping boundaries.2. Each P i is a bounded convex polytope.3. Within each P i , the function f Pi is linear.
Notably, if f is a DNN using only PWL layers, then f is PWL and so we can define f X .This symbolic representation allows one to reduce questions about the DNN f to questions about finitely-many linear functions F i .For example, because linear functions are convex, to verify that ∀x ∈ X. f (x) ∈ Y for some polytope Y , it suffices to verify ∀P i ∈ f X .∀v ∈ Vert(P i ).f ( v) ∈ Y , where Vert(P i ) is the (finite) set of vertices for the bounded convex polytope P i ; thus, here both of the quantifiers are over finite sets.The symbolic representation described above can be seen as a generalization of the ExactLine representation [51], which considered only one-dimensional restriction domains of interest.
Example 2. Consider again the DNN f : R 1 → R 1 given by 2].The input-output behavior of f on X is shown in Figure 1.From this, we can see that Within each of these partitions, the input-output behavior is linear, which for R 1 → R 1 we can see visually as just a line segment.As this set fully partitions X, then, this is a valid f X .
Computing the Symbolic Representation
This section presents an efficient algorithm for computing f X for a DNN f composed of PWL layers.To retain both scalability and precision, we will require the input region X be two-dimensional.This design choice is relatively unexplored in the neural-network analysis literature (most analyses strike a balance between precision and scalability, ignoring dimensionality).We show that, for two-dimensional X, we can use an efficient polytope representation to produce an algorithm that demonstrates good best-case and in-practice efficiency while retaining full precision.This algorithm represents a direct generalization of the approach of [51].The difficulties our algorithm addresses arise from three areas.First, when computing f X there may be exponentially many such partitions on all of R n but only a small number of them may intersect with X.Consequently, the algorithm needs to be able to find those partitions that intersect with X efficiently without explicitly listing all of the partitions on R n .Second, it is often more convenient to specify the partitioning via hyperplanes separating the partitions than explicit polytopes.For example, for the one-dimensional ReLU function we may simply state that the line x = 0 separates the two partitions, because ReLU is linear both in the region x ≤ 0 and x ≥ 0. Finally, neural networks are typically composed of sequences of linear and piecewise-linear layers, where the partitioning imposed by each layer individually may be well-understood but their composition is more complex.For example, identifying the linear partitions of y = ReLU(4 • ReLU(−3x − 1) + 2) is non-trivial, even though we know the linear partitions of each composed function individually.
Our algorithm only requires the user to specify the hyperplanes defining the partitioning for the activation function used in each layer; our current implementation comes with support for common PWL activation functions.For example, if a ReLU layer is used for an n-dimensional input vector, then the hyperplanes would be defined by the equations x 1 = 0, x 2 = 0, . . ., x n = 0.It then computes the symbolic representation for a single layer at a time, composing them sequentially to compute the symbolic representation across the entire network.
To allow such compositions of layers, instead of directly computing f X , we will define another primitive, denoted by the operator ⊗ and sometimes referred to as Extend, such that (1) and let I : x → x be the identity map.I is linear across its entire input space, and, thus, I X = {X}.By the definition of , where the final equality holds by the definition of the identity map I.We can then iteratively apply this procedure to inductively compute ( which is the required symbolic representation.
Algorithm for Extend
Algorithm 1 present an algorithm for computing Extend for arbitrary PWL functions, where Extend(h, g) = h ⊗ g = h • g.Geometric intuition for the algorithm.Consider the ReLU function (Definition 3).It can be shown that, within any orthant (i.e., when the signs of all coefficients are held constant), ReLU( x) is equivalent to some linear function, in particular the element-wise product of x with a vector that zeroes out the negative-signed components.However, for our algorithm, all we need to know is that the linear partitions of ReLU (in this case the orthants) are separated by hyperplanes x 1 = 0, x 2 = 0, . . ., x n = 0. Given a two-dimensional convex bounded polytope X, the execution of the algorithm for f = ReLU can be visualized as follows.We pick some vertex v of X, and begin traversing the boundary of the polytope in counter-clockwise order.If we hit an orthant boundary (corresponding to some hyperplane x i = 0), it implies that the behavior of the function behaves differently at the points of the polytope to one side of the boundary from those at the other side of the boundary.Thus, we partition X into X 1 and X 2 , where X 1 lies to one side of the hyperplane and X 2 lies to the other side.We recursively apply this procedure to X 1 and X 2 until the resulting polytopes all lie on exactly one side of every hyperplane (orthant boundary).But lying on exactly one side of every hyperplane (orthant boundary) implies each polytope lies entirely within a linear partition of the function (a single orthant), hence the application of the function on that polytope is linear, and hence we have our partitioning.
Functions used in algorithm.Given a two-dimensional bounded convex polytope X, Vert(X) returns a list of its vertices in counter-clockwise order, repeating the initial vertex at the end.Given a set of points X, ConvexHull(X) represents their convex hull (the smallest bounded polytope containing every point in X).Given a scalar value x, Sign(x) computes the sign of that value (i.e., −1 if x < 0, +1 if x > 0, and 0 if x = 0).
Algorithm description.
The key insight of the algorithm is to recursively partition the polytopes until such a partition lies entirely within a linear region of the function f .Algorithm 1 begins by constructing a queue containing the polytopes of g X .Each iteration either removes a polytope from the queue that lies entirely in one linear region (placing it in Y ), or splits (partitions) some polytope into two smaller polytopes that get put back into the queue.When we pop a polytope P from the queue, Line 6 iterates over all hyperplanes N k •x = b k defining the piecewise-linear partitioning of f , looking for any for which some vertex V i lies on the positive side of the hyperplane and another vertex V j lies on the negative side of the hyperplane.If none exist (Line 7), by convexity we are guaranteed that the entire polytope lies entirely on one side with respect to every hyperplane, meaning it lies entirely within a linear partition of f .Thus, we can add it to Y and continue.If two such vertices are found (starting Line 10), then we can find "extreme" i and j indices such that V i is the last vertex in a counter-clockwise traversal to lie on the same side of the hyperplane as V 1 and V j is the last vertex lying on the opposite side of the hyperplane.We then call SplitPlane() (Algorithm 2) to actually partition the polytope on opposite sides of the hyperplane, adding both to our worklist.
In the best case, each partition is in a single orthant: the algorithm never calls SplitPlane() at all -it merely iterates over all of the n input partitions, checks their v vertices, and appends to the resulting set (for a best-case complexity of O(nv)).In the worst case, it splits each polytope in the queue on each face, resulting in exponential time complexity.As we will show in Section 6, this exponential worst-case behavior is not encountered in practice, thus making SyReNN a practical tool for DNN analysis.
such that, within any partition imposed by the hyperplanes f is equivalent to some affine function.We then find that i on line 11 should be the last vertex on the first side of the hyperplane, while j should be the last vertex on the other side of the hyperplane.We will assume things are oriented so that i = v 1 and j = v 3 .Then SplitPlane is called, which adds new vertices p i = v 4 (shown in Figure 2b) where the edge v 1 → v 2 intersects the hyperplane, as well as p j = v 5 Algorithm 2: SplitPlane(V, g, i, j, N, b) Input: V , the vertices of the polytope in the input space of g.A function g. i is the index of the last vertex lying on the same side of the orthant face as V1.j is the index of the last vertex lying on the opposite side of the orthant face as V1.N and b define the hyperplane N • x = b to split on.Output: {P1, P2}, two sets of vertices whose convex hulls form a partitioning of V such that each lies on only one side of the N • x = b hyperplane.
where the edge v 3 → v 1 intersects the hyperplane.Separating all of the vertices on the left of the hyperplane from those on the right, we find that this has partitioned the original polytope into two sub-polytopes, each on exactly one side of the hyperplane, as desired.If there were more intersecting hyperplanes, we would then recurse on each of the newly-generated polytopes to further subdivide them by the other hyperplanes.
Representing Polytopes
We close this section with a discussion of implementation concerns when representing the convex polytopes that make up the partitioning of f X .In standard computational geometry, bounded polytopes can be represented in two equivalent forms: 1.The half-space or H-representation, which encodes the polytope as an intersection of finitely-many half-spaces.(Each half-space being defined as a halfspace defined by an affine inequality Ax ≤ b.) 2. The vertex or V-representation, which encodes the polytope as a set of finitely many points; the polytope is then taken to be the convex hull of the points (i.e., smallest convex shape containing all of the points).
Certain operations are more efficient when using one representation compared to the other.For example, finding the intersection of two polytopes in an Hrepresentation can be done in linear time by concatenating their representative half-spaces, but the same is not possible in V-representation.
There are two main operations on polytopes we need perform in our algorithms: (i) splitting a polytope with a hyperplane, and (ii) applying an affine map to all points in the polytope.In general, the first is more efficient in an H-representation, while the latter is more efficient in a V-representation.However, when restricted to two-dimensional polygons, the former is also efficient in a V-representation, as demonstrated by Algorithm 2, helping to motivate our use of the V-representation in our algorithm.
Furthermore, the two polytope representations have different resiliency to floating-point operations.In particular, H-representations for polytopes in R n are notoriously difficult to achieve high-precision with, because the error introduced from using floating point numbers gets arbitrarily large as one goes in a particular direction along any hyperplane face.Ideally, we would like the hyperplane to be most accurate in the region of the polytope itself, which corresponds to choosing the magnitude of the norm vector correctly.Unfortunately, to our knowledge, there is no efficient algorithm for computing the ideal floating point H-representation of a polytope, although libraries such as APRON [31] are able to provide reasonable results for low-dimensional spaces.However, because neural networks utilize extremely high-dimensional spaces (often hundreds or thousands of dimensions) and we wish to iteratively apply our analysis, we find that errors from using floating-point H-representations can quickly multiply and compound to become infeasible.By contrast, floating-point inaccuracies in a V-representation are directly interpretable as slightly misplacing the vertices of the polytope; no "localization" process is necessary to penalize inaccuracies close to the polytope more than those far away from it.
Another difference is in the space complexity of the representation.In general, H-representations can be more space-efficient for common shapes than Vrepresentations.However, when the polytope lies in a low-dimensional subspace of a larger space, the V-representation is usually significantly more efficient.
Thus, V-representations are a good choice for low-dimensionality polytopes embedded in high-dimensional space, which is exactly what we need for analyzing neural networks with two-dimensional restriction domains of interest.This is why we designed our algorithms to rely on Vert(X), so that they could be directly computed on a V-representation.
Extending to Higher-Dimensional Subsets of the Input Space
The 2D algorithm described above can be seen as implementing the recursive case of a more general, n-dimensional version of the algorithm that recurses on each of the (n − 1)-dimensional facets.In 2D, we trace the edges (1D faces) and use the 1D algorithm from [51] to subdivide them based on intersections with the hyperplanes defining the function.More generally, for an arbitrary n-dimensional polytope we can trace the (n − 1)-dimensional facets of the polytope, recursively applying the (n − 1)-dimensional variant of the algorithm to split those facets according to the linear partitions of the function.
We have experimented with such approaches, but found that the overhead of keeping track of all (n − k)-dimensional faces (commonly known as the face poset or combinatorial structure [16] of a polytope) was too large in higher dimensions.The two-dimensional algorithm addresses this concern by storing the combinatorial structure implicitly, representing 2D polytopes by their vertices in counter-clockwise order, from which edges correspond exactly to sequential vertices.To our knowledge, such a compact representation allowing arbitrary (n − k)-dimensional faces to be read off is not known for higher-dimensional polytopes.Nonetheless, we hope that extending our algorithms to GPUs and other massively-parallel hardware may improve performance to mitigate such overhead.
SyReNN tool
This section provides more details about the design and implementation of our tool, SyReNN (Symbolic Representations of Neural Networks), which computes f X , where f is a DNN using only piecewise-linear layers and X is a union of one-or two-dimensional polytopes.The tool is open-source; it is available under the MIT license at https://github.com/95616ARG/SyReNNand in the PyPI package pysyrenn.Input and output format.SyReNN supports reading DNNs from two standard formats: ERAN (a textual format used by the ERAN project [1]) as well as ONNX (an industry-standard format supporting a wide variety of different models) [43].Internally, the input DNN is described as an instance of the Network class, which is itself a list of sequential Layers.A number of layer types are provided by SyReNN, including FullyConnectedLayer, ConvolutionalLayer, and ReLULayer.To support more complicated DNN architectures, we have implemented a ConcatLayer, which represents a concatenation of the output of two different layers.The input region of interest, X, is defined as a polytope described by a list of its vertices in counter-clockwise order.The output of the tool is the symbolic representation f X .Overall Architecture.We designed SyReNN in a client-server architecture using gRPC [21] and protocol buffers [22] as a standard method of communication between the two.This architecture allows the bulk of the heavy computation to be done in efficient C++ code, while allowing user-friendly interfaces in a variety of languages.It also allows practitioners to run the server remotely on a more powerful machine if necessary.The C++ server implementation uses the Intel TBB library for parallelization.Our official front-end library is written in Python, and available as a package on PyPI so installation is as simple as pip install pysyrenn.The entire project can be built using the Bazel build system, which manages dependencies using checksums.Server Architecture.The major algorithms are implemented as a gRPC server written in C++.When a connection is first made, the server initializes the state with an empty DNN f (x) = x.During the session, three operations are permitted: (i) append a layer g so that the current session's DNN is updated from f 0 to f 1 (x) := g(f 0 (x)), (ii) compute f X for a one-dimensional X, or (iii) compute f X for a two-dimensional X.We have separate methods for one-and two-dimensional X, because the one-dimensional case has specific optimizations for controlling memory usage.The SegmentedLine and UPolytope types are used to represent one-and two-dimensional partitions of X, respectively.When operation (1) is performed, a new instance of the LayerTransformer class is initialized with the relevant parameters and added to a running vector of the current layers.When operation (2) is performed, a new queue of SegmentedLines is constructed, corresponding to X, and the before-allocated LayerTransformers are applied sequentially to compute f X .In this case, extra control is provided to automatically gauge memory usage and pause computation for portions of X until more memory is made available.Finally, when operation ( 3) is a performed, a new instance of UPolytope is initialized with the vertices of X and the LayerTransformers are again applied sequentially to compute f X .
Client Architecture.Our Python client exposes an interface for defining DNNs similar to the popular Sequential-Network Keras API [11].Objects represent individual layers in the network, and they can be combined sequentially into a Network instance.The key addition of our library is that this Network exposes methods for computing f X given a V-representation description of X.To do this, it invokes the server and passes a layer-by-layer description of f followed by the polytope X, then parses the response f X .
Extending to support different layer types.Different layer types and activation functions are supported by sub-classing the LayerTransformer class.Instances of LayerTransformer expose a method for computing Extend(h, •) for the corresponding layer h.To simplify implementation, two sub-classes of LayerTransformer are provided: one for entirely-linear layers (such as fullyconnected and convolutional layers), and one for piecewise-linear layers.For fully-linear layers, all that needs to be provided is a method computing the layer function itself.For piecewise-linear layers, two methods need to be provided: (1) computing the layer function itself, and (2) one describing the hyperplanes which separate the linear regions.The base class then directly implements Algorithm 1 for that layer.This architecture makes supporting new layers a straightforward process.
Float Safety.Like Reluplex [33], SyReNN uses floating-point arithmetic to compute f X efficiently.Unfortunately, this means that in some cases its results will not be entirely precise when compared to a real-valued or multiple-precision version of the algorithm.If a perfectly precise solution is required, the server code can be modified to use multiple-precision rationals instead of floats.Alternatively, a confirmation pass can be run using multiple-precision numbers after the initial float computation to confirm the accuracy of its results.The use of over-approximations may also be explored for ensuring correctness with floatingpoint evaluation, like in DeepPoly [50].Unfortunately, our algorithm does not directly lift to using such approximations, since they may blow the originally-2D region into a higher-dimensional (but very "flat") over-approximate polytope, preventing us from applying the 2D algorithm for the next layer.
Applications of SyReNN
This section presents the use of SyReNN in three example case studies.
Integrated Gradients
A common problem in the field of explainable machine learning is understanding why a DNN made the prediction it did.For example, given an image classified by a DNN as a 'cat,' why did the DNN decide it was a cat instead of, say, a dog?Were there particular pixels which were particularly important in deciding this?Integrated Gradients (IG) [53] is the state-of-the-art method for computing such model attributions.
Definition 5. Given a DNN f , the integrated gradients along dimension i for input x and baseline x is defined to be: The computed value IG i (x) determines relatively how important the ith input (e.g., pixel) was to the classification.However, exactly computing this integral requires a symbolic, closed form for the gradient of the network.Until [51], it was not known how to compute such a closed-form and so IGs were always only approximated using a samplingbased approach.Unfortunately, because it was unknown how to compute the true value, there was no way for practitioners to determine how accurate their approximations were.This is particularly concerning in fairness applications where an accurate attribution is exceedingly important.
In [51], it was recognized that, when X = ConvexHull({x, x }), f X can be used to exactly compute IG i (x).This is because within each partition of f X the gradient of the network is constant because it behaves as a linear function, and hence the integral can be written as the weighted sum of such finitelymany gradients.1Using our symbolic representation, the exact IG can thus be computed as follows: Where here y i , y i are the endpoints of the segment with y i closer to x and y i closest to x .
Implementation.The helper class IntegratedGradientsHelper is provided by our Python client library.It takes as input a DNN f and a set of (x, x ) input-baseline pairs and then computes IG for each pair.
Empirical Results.In [51] SyReNN was used to show conclusively that existing sampling-based methods were insufficient to adequately approximate the true IG.This realization led to changes in the official IG implementation to use the more-precise trapezoidal sampling method we argued for.
Visualization of DNN Decision Boundaries
Whereas IG helps understand why a DNN made a particular prediction about a single input point, another major task is visualizing the decision boundaries of a DNN on infinitely-many input points.Figure 3 shows a visualization of an ACAS Xu DNN [32] which takes as input the position of an airplane and an approaching attacker, then produces as output one of five advisories instructing the plane, such as "clear of conflict" or to move "weak left."Every point in the diagram represents the relative position of the approaching plane, while the color indicates the advisory.
One approach to such visualizations is to simply sample finitely-many points and extrapolate the behavior on the entire domain from those finitely-many points.However, this approach is imprecise and risks missing vital information because there is no way to know the correct sampling density to use to identify all important features.
Another approach is to use a tool such as DeepPoly [50] to over-approximate the output range of the DNN.However, because DeepPoly is an over-approximation, there may be regions of the input space for which it cannot state with confidence the decision made by the network.In fact, the approximations used by DeepPoly Table 1: Comparing the performance of DNN visualization using SyReNN versus DeepPoly for the ACAS Xu network [32].f X size is the number of partitions in the symbolic representation.SyReNN time is the time taken to compute f X using SyReNN.DeepPoly[k] time is the time taken to compute DeepPoly for approximating decision boundaries with k partitions.Each scenario represents a different two-dimensional slice of the input space; within each slice, the heading of the intruder relative to the ownship along with the speed of each involved plane is fixed.are extremely coarse.A naïve application of DeepPoly to this problem results in it being unable to make claims about any of the input space of interest.In order to utilize it, we must partition the space and run DeepPoly within each partition, which significantly slows down the analysis.Even when using 25 2 partitions, Figure 3b shows that most of the interesting region is still unclassifiable with DeepPoly (shown in white).Only when using 100 2 partitions is DeepPoly able to effectively approximate the decision boundaries, although it is still quite imprecise.
By contrast, f X can be used to exactly determine the decision boundaries on any 2D polytope subset of the input space, which can then be plotted.This is shown in Figure 3a.Furthermore, as shown in Table 1, the approach using f X is significantly faster than that using ERAN, even as we get the precise answer instead of an approximation.Such visualizations can be particularly helpful in identifying issues to be fixed using techniques such as those in Section 6.3.
Implementation. The helper class
PlanesClassifier is provided by our Python client library.It takes as input a DNN f and an input region X, then computes the decision boundaries of f on X. Timing Numbers.Timing comparisons are given in Table 1.We see that SyReNN is quite performant, and the exact SyReNN can be computed more quickly than even a mediocre approximation from DeepPoly using 55 2 partitions.Tests were performed on a dedicated Amazon EC2 c5.metal instance, using BenchExec [5] to limit the number of CPU cores to 16 and RAM to 16GB.
Patching of DNNs
We have now seen how SyReNN can be used to visualize the behavior of a DNN.This can be particularly useful for identifying buggy behavior.For example, in Figure 3a we can see that the decision boundary between "strong right" and "strong left" is not symmetrical.
The final application we consider for SyReNN is patching DNNs to correct undesired behavior.Patching is described formally in [52].Given an initial network N and a specification φ describing desired constraints on the input/output, the goal of patching is to find a small modification to the parameters of N producing a new DNN N that satisfies the constraints in φ.
The key theory behind DNN patching we will use was developed in [52].The key realization of that work is that, for a certain DNN architecture, correcting the network behavior on an infinite, 2D region X is exactly equivalent to correcting its behavior on the finitely-many vertices Vert(P i ) for each of the finitely-many P i ∈ f X .Hence, SyReNN plays a key role in enabling efficient DNN patching.
For this case study, we patched the same aircraft collision-avoidance DNN visualized in Section 6.2.We patched the DNN three times to correct three different buggy behaviors of the network: (i) remove "Pockets" of strong left/strong right in regions that are otherwise weak left/weak right; (ii) remove the "Bands" of weak-left advisory behind and to the left of the plane; and (iii) enforce "Symmetry" across the horizontal.The DNNs before and after patching with different specifications are shown in Figure 4.
Implementation The helper class NetPatcher is provided by our Python client library.It takes as input a DNN f and pairs of input region, output label X i , Y i , then computes a new DNN f which maps all points in each X i into Y i .Timing Numbers.As in Section 6.2, computing f X for use in patching took approximately 10 seconds.
Related Work
The related problem of exact reach set analysis for DNNs was investigated in [59].However, the authors use an algorithm that relies on explicitly enumerating all exponentially-many (2 n ) possible signs at each ReLU layer.By contrast, our algorithm adapts to the actual input polytopes, efficiently restricting its consideration to activations that are actually possible.
Hanin and Rolnick [26] prove theoretical properties about the cardinality of f X for ReLU networks, showing that | f X | is expected to grow polynomially with the number of nodes in the network for randomly-initialized networks.
Thrun [56] and Bastani et al. [4] extract symbolic rules meant to approximate DNNs, which can be thought of as an approximation of the symbolic representation f X .
In particular, the ERAN [1] tool and underlying DeepPoly [50] domain were designed to verify the non-existence of adversarial examples.Breutel et al. [6] presents an iterative refinement algorithm that computes an overapproximation of the weakest precondition as a polytope where the required output is also a polytope.
Scheibler et al. [47] verify the safety of a machine-learning controller using the SMT-solver iSAT3, but support small unrolling depths and basic safety properties.Zhu et al. [61] use a synthesis procedure to generate a safe deterministic program that can enforce safety conditions by monitoring the deployed DNN and preventing potentially unsafe actions.The presence of adversarial and fooling inputs for DNNs as well as applications of DNNs in safety-critical systems has led to efforts to verify and certify DNNs [3,33,14,30,17,7,58,50,2]. Approximate reachability analysis for neural networks safely overapproximates the set of possible outputs [17,59,60,58,13,57].
Prior work in the area of network patching focuses on enforcing constraints on the network during training.DiffAI [40] is an approach to train neural networks that are certifiably robust to adversarial perturbations.DL2 [15] allows for training and querying neural networks with logical constraints.
Conclusion and Future Work
We presented SyReNN, a tool for understanding and analyzing DNNs.Given a piecewise-linear network and a low-dimensional polytope subspace of the input subspace, SyReNN computes a symbolic representation that decomposes the behavior of the DNN into finitely-many linear functions.We showed how to efficiently compute this representation, and presented the design of the corresponding tool.We illustrated the utility of SyReNN on three applications: computing exact IG, visualizing the behavior of DNNs, and patching (repairing) DNNs.
In contrast to prior work, SyReNN explores a unique point in the design space of DNN analysis tools.In particular, instead of trading off precision of the analysis for efficiency, SyReNN focuses on analyzing DNN behavior on lowdimensional subspaces of the domain, for which we can provide both efficiency and precision.
We plan on extending SyReNN to make use of GPUs and other massivelyparallel hardware to more quickly compute f X for large f or X. Techniques to support input polytopes that are greater than two dimensional is also a ripe area of future work.We may also be able to take advantage of the fact that nonconvex polytopes can be represented efficiently in 2D.Extending algorithms for f X to handle architectures such as Recurrent Neural Networks (RNNs) will open up new application areas for SyReNN.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/),which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material.If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Fig. 3 :
Fig. 3: Visualization of decision boundaries for the ACAS Xu network.Using SyReNN (left) quickly produces the exact decision boundaries.Using abstract interpretation-based tools like DeepPoly (middle and right) are slower and produce only imprecise approximations of the decision boundaries. | 8,870.4 | 2021-01-09T00:00:00.000 | [
"Computer Science"
] |
Theoretical investigation of the infrared spectrum of small polyynes †
The full cubic and semidiagonal quartic force fields of acetylene (C 2 H 2 ), diacetylene (C 4 H 2 ), triacetylene (C 6 H 2 ), and tetraacetylene (C 8 H 2 ) are determined using CCSD(T) (coupled cluster theory with single and double excitations and augmented by a perturbative treatment of triple excitations) in combination with the atomic natural orbital (ANO) basis sets. Application of second-order vibrational perturbation theory (VPT2) results in vibrational frequencies that agree well with the known fundamental and combination band experimental frequencies of acetylene, diacetylene, and triacetylene (average discrepancies are less than 10 cm (cid:2) 1 ). Furthermore, the predicted ground state rotational constants ( B 0 ) and vibration–rotation interaction constants ( a i ) are shown to be consistent with known experimental values. New vibrational frequencies and rotational parameters from the presented theoretical predictions are given for triacetylene and tetraacetylene, which can be used to aid laboratory and astronomical spectroscopic searches for characteristic transitions of these molecules. 51–54 The combination of these studies shows that current quantum chemical theory, particularly coupled cluster theory with single and double excitations and augmented by a perturbative treatment of triple excitations (CCSD(T)), 55 is able to accurately reproduce equilibrium geometries, experimental vibrational frequencies, vibration–rotation interaction constants ( a i ), and ground state rotational constants ( B 0 ).
Extensive theoretical and experimental studies have been carried out for acetylene and diacetylene in the past few decades, including high-resolution spectroscopic studies of all the fundamental bands and a significant number of the combination bands, 29,[40][41][42][43][44][45][46][47][48][49][50] and high level ab initio calculations that take into account anharmonic effects. [51][52][53][54] The combination of these studies shows that current quantum chemical theory, particularly coupled cluster theory with single and double excitations and augmented by a perturbative treatment of triple excitations (CCSD(T)), 55 is able to accurately reproduce equilibrium geometries, experimental vibrational frequencies, vibration-rotation interaction constants (a i ), and ground state rotational constants (B 0 ).
Triacetylene and tetraacetylene are not as thoroughly studied, notably in terms of rotational information. While all of the fundamental vibrational modes of triacetylene have been measured, there is only rotational information for the IR active fundamental modes, 56 and the strongest IR combination band (n 8 + n 11 ). [57][58][59][60][61] However, theoretical studies of triacetylene do give rotational information for the remaining modes from CCSD(T) calculations of the vibration-rotation interaction constants 62 and the equilibrium geometry. 63 In addition, the harmonic frequencies of triacetylene were calculated using partial fourth-order many-body perturbation theory [SDQ-MBPT(4)]. 63 Conversely though, to the authors' knowledge, there is almost no rotational information for tetraacetylene. There has been only one low-resolution spectroscopic study of tetraacetylene, which measured three of the fundamentals (n 6 , n 8 , and n 14 at 3329.4, 2023.3, and 621.5 cm À1 , respectively), and one combination band (n 10 + n 14 at 1229.7 cm À1 ), and gives an estimate for the electronic ground state rotational constant, B 0 . 64 Unfortunately, the theoretical knowledge of tetraacetylene is equally limited, with only two studies of the equilibrium geometry (at the Hartree-Fock 65 and B3LYP 66 level of theory), and a calculation of the harmonic vibrational frequencies at the SVWN level of theory. 64 While the two modes that are most useful for astronomical identification (n 14 and n 10 + n 14 ) were measured, the uncertainty associated with the line positions is too large to allow for an unambiguous assignment. Moreover, some high-resolution IR searches have been attempted, 50,61,62,67 but so far no transitions have been assigned to tetraacetylene.
In this paper, we report the ab initio calculations for acetylene, diacetylene, triacetylene, and tetraacetylene. Due to the centrosymmetric nature of these molecules, observations in the laboratory and in space are most easily accomplished through their infrared spectra. As such, the properties computed and presented here are those related to that technique: fundamental vibrational frequencies, ground state rotational constants, and intramolecular interactions. The computational approach is calibrated using the well studied acetylene and diacetylene, and then extended to make predictions for triacetylene and tetraacetylene.
Computational methods
All calculations were carried out at the CCSD(T) level of theory, which with a sufficiently large basis set has been shown to accurately reproduce experimental values of semi-rigid molecules. [52][53][54][55][68][69][70][71][72] Equilibrium geometries were determined using the large core-valence correlation-consistent quadruple-z basis set (cc-pCVQZ), which features [8s7p5d3f1g] (nonhydrogen atoms) and [4s3p2d1f] (hydrogen) of (15s9p5d3f1g) and (6s3p2d1f) primitive basis sets, respectively. [73][74][75] All electron (AE)-CCSD(T)/cc-pCVQZ has been shown to give very accurate equilibrium geometries for unsaturated hydrocarbons. 54,[76][77][78] Optimizations were done using analytic energy derivatives, 79 and were considered converged when the root-mean-square (RMS) gradient fell below 10 À10 au. However, it is well known that correlation-consistent basis sets, such as cc-pCVQZ, tend to underestimate the vibrational frequencies of symmetric bending modes (p g ) of conjugated molecules, e.g., polyynes, due to their susceptibility to an intramolecular variant of basis set superposition error (BSSE). 54,80 It has been shown that one way to avoid this problem is to use basis sets with a large number of Gaussian primitives (particularly f-type), such as the atomic natural orbital (ANO) basis set (with the primitive basis set (13s8p6d4f2g) for non-hydrogen atoms and (8s6p4d2f) for hydrogen). 52,81,82 The basis set has two common truncations: [4s3p2d1f] for nonhydrogen atoms and [4s2p1d] for hydrogen (hereafter known as ANO1), and [5s4p3d2f1g] (non-hydrogen atoms) and [4s3p2d1f] (hydrogen) (hereafter known as ANO2). 74,75,81 In addition, only the valence electrons of carbon are considered in the correlation treatment, i.e., standard frozen-core (fc) calculations. (fc)-CCSD(T)/ ANO1 has been shown to accurately reproduce experimental frequencies and intensities for small molecules. 52,83,84 Using the (fc)-CCSD(T)/ANO1 optimized geometry, second-order vibrational perturbation (VPT2) theory calculations were determined from full cubic and the semidiagonal part of the quartic force fields obtained by numerical differentiation of analytic CCSD(T) second derivatives. 70,85 All calculations were performed with the development version of the CFOUR program. 86 3 Results and discussion
Equilibrium structure
The AE-CCSD(T)/cc-pCVQZ equilibrium geometries are shown in Fig. 1, with comparison to experimentally derived values (in italics) when known. 53,87,88 The theoretical equilibrium bond lengths for acetylene, diacetylene, and triacetylene all agree within 0.5% of the structures determined from experimentally measured rotational constants. As the length of the carbon chain increases, the C-H bond lengths stay essentially the same, B1.062 Å, consistent with a sp-H type C-H bond. However, the CRC bond lengths increase (particularly the internal CRC bonds), while the C-C bond lengths decrease, becoming closer to that typical of CC double bonds. This suggests that the p electrons become more delocalized over the internuclear axis, and the polyyne's configuration moves from a strict triple-single bond alternation to more of a consecutive double bond character of the CC bonds, making the overall structure more rigid as C 2 units are added, an effect that also qualitatively acts to increase the biradical character of the molecule as the size grows.
The equilibrium rotational constants, B e , obtained from the AE-CCSD(T)/cc-pCVQZ equilibrium geometries are summarized in Table 1, and agree well with experimental ground state rotational constants (B 0 ). As such, the equilibrium rotational constants suggest that the calculations predict the correct ground state geometry, because for linear molecules with more than three atoms the summation of vibration-rotation interaction constants (a i ) is expected to be close to zero, and from In addition, as seen for other carbon chains (e.g., HC n , HC 2n+1 N, and H 2 C n ) 89 the centrifugal distortion constant (D e ) decreases with increasing molecular size, with a theoretical D e = 1.6 Â 10 À6 cm À1 for acetylene, D e = 1.5 Â 10 À8 cm À1 for diacetylene, D e = 8.6 Â 10 À10 cm À1 for triacetylene, and D e = 1.2 Â 10 À10 cm À1 for tetraacetylene. These values are consistent with those found experimentally for the respective vibrational ground states ( Table 1). As noted by Thaddeus et al. 89 this behavior of increasing stiffness with chain length is a distinguishing characteristic associated with bona fide chains.
Spectroscopic properties of acetylene and diacetylene
The quality of the present calculations is checked by comparison to the experimentally well studied acetylene and diacetylene. The harmonic and VPT2 fundamental frequencies of the fundamental and combination bands are given in Table 2 and 3 for acetylene and diacetylene, respectively, and experimental values are included for comparison. The (fc)-CCSD(T)/ANO1 VPT2 fundamental frequencies show good agreement with experimental values, with most observed-calculated deviations (o-c) being less than 5 cm À1 and all being less than 15 cm À1 .
Based on previous studies of acetylene 52 and diacetylene, 53 the use of the ANO2 basis set was evaluated compared to the ANO1 basis set. For some of the vibrational modes, such as the n 4 mode of acetylene [612.88 cm À1 (observed)], 42 Martin et al. 52 showed that CCSD(T)/ANO2 can give a slightly better agreement (o-c value of B2 cm À1 ) compared to the ANO1 basis set (o-c value of B12 cm À1 ). However, the study by Thorwirth et al. 53 showed that, for diacetylene, the average o-c value with CCSD(T)/ANO2 is comparable to that for the ANO1 basis set (B6 cm À1 and B4 cm À1 , respectively). Moreover, the time cost of (fc)-CCSD(T)/ANO2 calculations compared to (fc)-CCSD(T)/ ANO1 far outweighs the minor frequency differences, and does not justify the higher computational cost of the ANO2 basis set in predicting the fundamental frequencies of longer polyynes.
The (fc)-CCSD(T)/ANO1 anharmonicity constants (x ij , ESI †) also accurately account for the known combination bands of acetylene and diacetylene (Tables 2 and 3, respectively). All the combination bands are within 5 cm À1 of their observed values. For both acetylene and diacetylene, the ANO1 basis set is able to most accurately reproduce the C-H asymmetric stretch mode (n 3 and n 4 , respectively). Significant is the agreement between the experimental and our predicted frequencies of n 6 + n 8 [1241.060828 (38) cm À1 (observed) 46 and 1244.7 cm À1 (theoretical)], and 2n 6 + n 8 [1863.2512(5) cm À1 (observed) 44 and 1864.6 cm À1 (theoretical)] of diacetylene; both of which had only previously been calculated with CCSD(T)/cc-pCVQZ, and had o-c values greater than 20 cm À1 . 54 This suggests that the combination band VPT2 frequencies of polyynes determined using (fc)-CCSD(T)/ ANO1 are accurate to aid identification of molecules, such as in astronomical surveys.
The vibration-rotation interaction constants (Table 4) are also determined in the course of the VPT2 calculation, and are in good agreement with both previous theoretical studies 52,54 and experimentally determined values. 29,44,46,50,51,54 Based on the vibration-rotation interaction constants, the ground state rotational constants (B 0 ) were determined using the AE-CCSD(T)/ cc-pCVQZ determined B e values ( Table 1). For acetylene, B 0 = 1.175319 cm À1 , which is a 0.1% difference compared to the experimentally determined value of B 0 = 1.17664632 (18) cm À1 . 90 Diacetylene shows a similar 0.2% difference between the theoretical value of B 0 = 0.146167 cm À1 , and the experimentally determined value of B 0 = 0.1464123(17) cm À1 . 50 The consistent accuracy of these values suggests that the method presented is clearly good enough to be extrapolated to and aid high-resolution infrared spectroscopic searches for the larger polyynes.
Spectroscopic properties of triacetylene
The (fc)-CCSD(T)/ANO1 harmonic and VPT2 fundamental frequencies along with the experimental frequencies are given in Table 5. Comparison between theoretical VPT2 frequencies and experimental fundamentals measured with high-resolution techniques shows average o-c values that are smaller than those seen for acetylene or diacetylene (o-c B 2 cm À1 ). For the known combination band, the (fc)-CCSD(T)/ANO1 anharmonicity constants (x ij , ESI †) are able to reproduce the experimental value to within 5 cm À1 , suggesting other combination band frequencies are of equal accuracy.
For the modes observed in low-resolution studies (e.g., n 1 and n 12 ), the agreement is still good with o-c values less than 20 cm À1 . The notable exception is the internal CRC asymmetric stretch mode (n 7 ), which differs by 45 cm À1 . Since no rotationally resolved data can be found for this band, it is possible that the band observed at 1115.0 cm À1 59 was mis-assigned as the n 7 fundamental. A more likely assignment for this band is the n 9 + n 11 combination band, which has a predicted VPT2 frequency of 1107.1 cm À1 , a calculated intensity of 0.7 km mol À1 , and the same symmetry. Furthermore, the combination band is expected to be 3.5Â more intense than the n 7 fundamental at 0.2 km mol À1 , suggesting that n 9 + n 11 is more likely of the two to be observed. However, rotationally resolved measurements of this band are clearly needed to confirm this speculation.
We note that, a resonance between the n 5 fundamental and the n 2 + n 7 and 3n 7 combination bands must be addressed to achieve the very small (1 cm À1 ) o-c difference obtained for the C-H asymmetric stretch mode, n 5 . The vibrational frequencies as a result of resonant interactions are calculated by a deperturbation-diagonalization technique followed by transformation of the deperturbed transition moments, as discussed 85 This combination of Fermi and Darling-Dennison interactions shifts the n 5 predicted frequency from 3333.1 to 3329.5 cm À1 , which is able to reproduce the experimentally observed frequency [3329.0533(2) cm À1 61 ] with the same accuracy seen for diacetylene (o-c B 0.5 cm À1 ). The combination bands involved are similarly shifted: n 2 + n 7 from 3329.5 to 3362.2 cm À1 , and 3n 7 from 3526.7 to 3498.7 cm À1 . Since the shift is most pronounced for the two combination bands, future experimental work to observe either of these bands is required to confirm this prediction.
Spectroscopic properties of tetraacetylene
The (fc)-CCSD(T)/ANO1 harmonic and VPT2 frequencies of the fundamental and combination bands for tetraacetylene are given in Table 6, and the (fc)-CCSD(T)/ANO1 anharmonicity constants (x ij ) are given in the ESI. † For the four experimentally observed bands, agreement of the observed and calculated frequencies is good at 7 cm À1 , which is comparable to the uncertainty of the low resolution measurements. Furthermore, the ANO1 VPT2 frequencies are able to reproduce the experimental frequencies far better than the previous harmonic frequency calculations, which had o-c values of B20-100 cm À1 . 64 Of the predicted fundamental and combination bands, there are a number of bands that are found/predicted to have sufficient intensity and/or relatively unique frequency range that could offer viable target transitions to use to search for tetraacetylene in future laboratory or astronomical spectra. For example, in the IR the n 1 + n 6 at 6550.8 cm À1 or n 12 + n 15 at 871.9 cm À1 combination bands have both comparable predicted intensity to measured bands of di-and triacetylene, and have transitions in relatively clean regions of the spectrum. In terms of astronomical searches, the n 17 mode at 60.7 cm À1 , offers a unique target transition, since its low frequency makes it accessible by far-IR observations, similar to the n 2 bending mode of C 3 . 92 Based on the results discussed for the other small polyynes, the theoretical vibration-rotation interaction constants given in Table 4 are sufficient to assist in identification of ro-vibrational bands of tetraacetylene. The a i results in a theoretical ground state rotational constant of B 0 = 0.018844 cm À1 that agrees within errors with the experimentally determined value, B 0 = 0.020(3) cm À1 . 64 Overall, for polyynes the difference between the experimental and calculated rotational constants (DB 0 ) decreases from 0.001 to 0.00008 cm À1 as the chain length is increased, which is consistent with the trend seen for other carbon chain molecules (e.g., HC n N, HC n , C n O). 93 Therefore, if the trend continues as expected then the DB 0 for tetraacetylene is equal to or smaller than that seen for triacetylene, and the determined ground state rotational constant is a good approximation of the true value.
Conclusions
Accurate equilibrium geometries have been determined at the AE-CCSD(T)/cc-pCVQZ level of theory, and the full cubic and semidiagonal quartic force field have been determined at the (fc)-CCSD(T)/ANO1 level of theory for acetylene and the three smallest polyynes. No scaling or adjustments had to be included to match theoretical values with those determined by experiments. The resulting VPT2 fundamental vibrational frequencies and vibration-rotation interaction constants agree with known experimental values, showing about a 5 cm À1 deviation in frequencies for bands with high-resolution infrared information. For bands with only low-resolution data, the theoretical frequencies are able to confirm mode assignments or suggest a reassignment, as in the case of the observed band at 1115.0 cm À1 of triacetylene to the n 9 + n 11 combination band, which has previously been attributed to the n 7 fundamental. The provisional ab initio method used here is also able to accurately reproduce the observed frequencies of combination bands.
The calculated fundamental frequencies for triacetylene and tetraacetylene give insight as to why tetraacetylene has not yet been observed in space. Observation of centrosymmetric molecules in astronomical environments is mainly through infrared detection of the high intensity bending modes; e.g., n 8 [628.040776(36) cm À1 ] 29 64 and n 10 + n 14 [1229.7(5) cm À1 ], 64 and are predicted to be significantly weaker in intensity due to lower column densities. 34,35 Consequently, at these frequencies and resolutions of the previous infrared observations where polyynes were detected, [20][21][22]24,25 the transitions of tetraacetylene are blended with those of triacetylene. Other bands of tetraacetylene would be more suitable for identification, such as n 1 + n 6 , n 12 + n 15 , or n 17 that are expected to be equally strong as bands already used to identify di-and triacetylene.
Overall, the resulting computed geometries lead to equilibrium rotational constants (B e ), which when corrected for vibrational zero-point effects give ground state equilibrium constants (B 0 ) that agree with experimental values (0.2%).
Based on the small o-c values for acetylene, diacetylene, and triacetylene, we are confident that the fundamental frequencies and spectroscopic constants determined here offer an accurate guide for spectroscopic searches focused on detection of ro-vibrational bands of triacetylene and tetraacetylene. Such work is underway in our laboratory.
Conflicts of interest
There are no conflicts to declare. | 4,356.8 | 2018-02-21T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Soft modifications to jet fragmentation in high energy proton-proton collisions
The discovery of collectivity in proton-proton collisions, is one of the most puzzling outcomes from the two first runs at LHC, as it points to the possibility of creation of a Quark-Gluon Plasma, earlier believed to only be created in heavy ion collisions. One key observation from heavy ion collisions is still not observed in proton-proton, namely jet-quenching. In this letter it is shown how a model capable of describing soft collective features of proton-proton collisions, also predicts modifications to jet fragmentation properties. With this starting point, several new observables suited for the present and future hunt for jet quenching in small collision systems are proposed.
Introduction
One of the key open questions from Run 1 and Run 2 at the LHC, has been prompted by the observation of collective features in collisions of protons, namely the observation of a near-side ridge [1], as well as strangeness enhancement with multiplicity [2]. Similar features are, in collisions of heavy nuclei, taken as evidence for the emergence of a Quark-Gluon Plasma (QGP) phase, few fm after the collision.
The theoretical picture of collective effects in heavy ion collisions is vastly different from the picture known from protonproton (pp). Due to the very different geometry of the two system types, interactions in the final state of the collision become dominant in heavy ion collisions, while nearly absent in pp collisions. The geometry of heavy ion collisions is so different from pp collision that in fact even highly energetic jets suffer an energy loss traversing the medium, known as jet quenching.
The ATLAS experiment has recently shown that the ridge remains in events tagged with a Z-boson [3]. While maybe unsurprising by itself, the implication of this measurement is a solid proof that some collective behaviour exists in events where a high p ⊥ boson is produced, possibly with an accompanying jet. In this letter this observation is taken as a starting point to investigate how the same dynamics producing the ridge in Z-tagged collisions, may also affect jet fragmentation. To investigate this, the microscopic model for collectivity, based on interacting strings [4,5,6] is used. The model has been shown to reproduce the near side ridge in minimum bias pp, and has been implemented in the PYTHIA8 event generator [7], allowing one to study its influence also on events containing a Z and a hard jet.
The non-observation of jet quenching in pp and pPb collisions is, though maybe not surprising due to the vastly different geometry, one of the most puzzling features of small system collectivity. If collectivity in small systems is due to final state interactions, it should be possible to also measure its effect on jets. If, on the other hand, collectivity in small collision systems is not due to final state interactions, but mostly due to saturation effects in the initial state -as predicted by Color Glass Condensate calculations [8] -the non-observation of jet quenching will follow by construction. The continued search for jet quenching in small systems is therefore expected to be a highly prioritized venue for the upcoming high luminosity phase of LHC [9].
The microscopic model for collectivity
Most general features of pp collisions, such as particle multiplicities and jets, can be described by models based on string fragmentation [10,11]. The confining colour field between partons, is described as a massless relativistic string. In the original model, such strings have no transverse extension, and hadronize independently. The longitudinal kinematics of the i'th breaking is given by the Lund symmetric fragmentation function: where z is the fraction of the remaining available momentum taken away by the hadron. N is a normalization constant, and a and b are tunable parameters, relating the fragmentation kinematics to the breakup space-time points of the string, which are located around a hyperbola with a proper time of: Figure 1: (a) A sketch showing a high multiplicity pp collision in impact parameter space ( b) and rapidity (y), with several MPIs populating the collision with strings. The collision also features a Z boson and a jet. In a normal configuration (black), the hard part of the jet fragments outside the densely populated region. In the used toy geometry (red), the jet is forced to fragment inside the densely populated region. (b) The ratio z j = p ⊥, j /p ⊥,z with default Pythia 8 (red), Pythia 8 + shoving with normal event geometry (blue), and the toy event geometry (green).
where κ ∼ 1 GeV/fm is the string tension. The transverse dynamics is determined by the Schwinger result: where m ⊥ is the transverse mass of the quark or diquark produced in the string breaking 1 . When a qq pair moves apart, spanning a string between them, the string length is zero at time τ = 0. To obey causality, also its transverse size must be zero, allowing no interactions between strings for the first short time (< 1 fm/c) after the initial interaction. After this initial transverse expansion, strings may interact with each other, by exerting small transverse shoves on each other. In refs. [4,5] a model for this interaction was outlined, based on early considerations by Abramowski et al. [12]. Assuming that the energy in a string is dominated by a longitudinal colour-electric field, the transverse interaction force per unit string length is for two parallel strings, given by: where both d ⊥ (the transverse separation of the two strings), and ρ (the string transverse width) are time dependent quantities. The parameter g is a free parameter, which should not deviate too far from unity. Equation (2) gives an (average) upper limit for how long time the strings should be allowed to shove each other around, as the strings will eventually hadronize 2 . String 1 The formalism does dictate whether to use current or constituent quark masses. In Pythia the supression factors s/u and diquark/quark are therefore determined from data, with resulting quark masses providing a consistency check. 2 Eq. (2) is written up with a string in vacuum in mind. It might be possible that the string life time is modified in the dense environment of a heavy ion collision.
hadronization and the shoving model has been implemented in the Pythia 8 event generator, and all predictions in the following are generated using this implementation.
Effects on jet hadronization
We consider now a reasonably hard Z-boson, produced backto-back with a jet. Due to the large p ⊥ of the jet, its core will have escaped the transverse region in which shoving takes place well before it is affected. See figure 1 (a, left) in black for a sketch.
In the following simulations, this semi-realistic geometry is created by picking transverse coordinates for each MPI according to the convolution of the two proton mass distributions, which are assumed to be 2D Gaussians. In a heavy ion collision, the jet must still traverse through a densely populated region, due to the much larger geometry. In central Pb-Pb collisions, the observed effect by CMS [13], is that the z j distribution moves to the left. To investigate whether shoving can give a qualitatively similar signature, a set-up similar to that of the experiment, just for pp collisions at √ s = 7 TeV, is studied in the following. A Z-boson reconstructed from leptons with 80 GeV< M Z < 100 GeV, p ⊥ > 40 GeV is required, and the leptons are required each to have p ⊥ > 10 GeV. The leading anti-k ⊥ [14] jet (using FastJet [15] in Rivet [16]) is required to have p ⊥ > 80 GeV and ∆φ z j > 3π/4. We study three different situations, with the result given in figure 1 (b).
In red default Pythia 8 is shown, in blue Pythia 8 + shoving, with an event geometry as indicated above with the jet escaping, and in green Pythia 8 + shoving with a toy geometry. In the toy geometry, the jet is placed in origo, and strings are allowed to shove each other already from the initial interaction, thus violating causality. This has the effect that the strings of the underlying event are allowed to shove even the hardest fragment of the jet. The toy model is clearly not a realistic picture of a pp interaction, but is implemented in order to give an effect similar to what one would expect from a heavy ion collision, where the event geometry allows strings from other nucleon-nucleon sub-collisions, to interact also with the hardest jet fragment. The toy model is sketched in figure 1 (a, right) in red, compared to the normal, more realistic setup in black. In the normal setup, the strings are allowed to propagate for a finite time, indicating the time it takes for the strings to grow from infinitesimal transverse size, to their equilibrium size.
While shoving in a toy geometry produces an effect qualitatively similar to what one would expect from jet quenching, the effect in a realistic geometry is far too suppressed (comparing blue to red in figure 1 (b)). Several suggestions exist for overcoming this geometric suppression, prominently using jet substructure observables [17], or e.g. using a delayed signal from top decays [18] (in AA collisions). In the remaining paper another approach will be described. Instead of looking for deviations in the spectrum of a narrow jet compared to a "vacuum" expectation, we start from the wide-R (R 2 = ∆η 2 + ∆φ 2 ) part where collectivity in the form of a ridge is known to exists even in pp collisions. The same observable is then calculated as function of R, all the way to the core, where the soft modification is expected to vanish.
Near side ridge in Z-tagged events
The ridge, as recently measured by ATLAS in events with a Z boson present [3], provides an opportunity. The requirement of a Z boson makes the events in question very similar to the events studied above. The Z does not influence the effect of the shoving model, and in figure 2 we show high multiplicity events with and without shoving, with the appearance of a ridge in the latter -in accordance with the experimental results 3 .
It is instructive to discuss the result of figure 2 with the sketch in figure 1 (a) in mind. Since the ridge analysis requires a |∆η| gap of 2.5, the jet region is, by construction, cut away. (Keeping in mind that in this case there is no required jet trigger.) The underlying event does, however, continue through the central rapidity range, and even "under" the jet, a ridge should be visible, if only one could perform a true separation of jet particles from underlying event particles in an experiment. If that is not possible, it is reasonable to naively ask if the presence of a ridge in the underlying event will by itself give rise to a shift in z j . The result in figure 1 (b) (blue line) suggests that it does not.
Influence on jet observables
As the ATLAS measurement has established, there is indeed collectivity present in (high multiplicity) events with a Z present. In the previous section it was shown that the measured signature can be adequately described by the shoving model. Now the situation will be extended to include also a high-p ⊥ jet trigger in the same way as in section 3, and the effect of the collective behaviour on the jet will be discussed.
From equations (1) and (3) we see that two physical quantities are present in the hadronization model and its modification, namely the hadron p ⊥ and the mass 4 . These quantities need to be cast into observables that provide information about the full jet. This will be done in the following, and the effect of the shoving model examined. [ (6) to Pythia 8. Errors are fit errors (1σ), fits shown in figure 3.
Effect on jet-p ⊥ : The jet cross section
As there is little effect on the raw jet-p ⊥ spectra, the jet cross section is introduced: where p ⊥, j is the p ⊥ of the leading jet in the event, and p ⊥,0 is the imposed phase space cut-off. It was pointed out by Ellis et al. [19], that the R-dependence of σ j under the influence of MPIs in a pp collision, can be parametrized as A + B log(R) + CR 2 . Later Dasgupta et al. [20] noted that hadronization effects contributes like −1/R. This gives a total parametrization: By construction, the ridge effect from the previous chapter is far away from the jet in η, and therefore also in R. Any contribution from shoving can be reasonably expected to be most pronounced for large R. Equation (4) gives a contribution of where d ⊥ is density dependent. In the previously introduced semi-realistic geometry, we would therefore expect a contribution to σ j , which is ∝ R 2 , i.e. a correction to the parameter C in equation (6). In figure 3, σ j (R) is shown without MPIs and hadronization (red), with MPI, no hadronization (blue), Pythia 8 default (green) and Pythia 8 + shoving (black). The analysis setup is the same as in section 3. Results from the Monte Carlo is shown as crosses, and the resulting fits as dashed lines, with parameters given in table 1.
From the fits it is visible that shoving contributes to the R 2 dependence as expected. Directly from figure 3 it is visible that shoving contributes to the jet cross section at a level comparable to hadronization effects.
In order to use this procedure to set limits on jet quenching in small systems, comparison must be made to predictions. In figure 3 only LO predictions are given, but while NLO corrections are sizeable enough that figure 3 cannot be taken as a numerically accurate prediction, such corrections will not affect the relative change in σ j with and without shoving, and will not affect the result. More crucial is the effect of parton density uncertainties, which may affect σ j up to 10% for this process [21]. This points to the necessity of more precise determinations of PDFs, if microscopic non-perturbative effects on hard probes in pp collisions are to be fully understood.
Soft measures: Average hadron mass and charge
The hadrochemistry of the jet is here quantified in a quite inclusive manner by the average hadron mass: where N p is the number of hadrons in the jet, and m h are the individual hadron masses. Furthermore the total jet charge is studied: where q i are the individual hadron electric charges. As shoving only affects these quantities indirectly, the predicted effect is not as straight forward as was the case for jet cross section, but requires a full simulation to provide predictions. In figure 4 the average hadron mass in the leading jet (still in Z+jet collisions as above) is shown for two exemplary values of R. For small R, m h is unchanged, but as R grows, a significant change, on the order of 10% is visible. The Q j distribution for R = 0.3 jets is shown in figure 5. It is seen directly, that for this particular value of R, shoving widens the distribution, and also the mean is further shifted in the positive direction. The R-dependency of this behaviour is shown in figure 6. Here both the mean and the width of the jet distribution at different values of R is shown (note the different scales on the axes). It is seen that this observables shows deviations up to 40% in the large-R limit. Jet identification techniques to reveal whether the seed parton is a gluon or a quark [22] might be able to increase the discriminatory power even further.
It should be noted that the jet charge has been a challenge for fragmentation models since the days of e + e − collisions at LEP [23]. The renewed interest in fragmentation properties from the observation of collectivity in small systems provides a good opportunity to also go back and revisit older observations. The jet hadrochemistry can be studied in a more exclusive manner, by means of particle identification, similar to what is done in nuclear collisions. Such observables will also be largely affected by formation of colour multiplets, increasing the string tension [24,6]. Some studies of this effect in jets in pp collisions have been performed [25], but could require further attention to the important space-time structure, as described in section 3. Such detailed studies will be deferred to a future publication.
Conclusions
The non-observation of jet quenching in small systems is one of the key open questions to understand collective behaviour, similar to those in heavy ion collisions, in collisions of protons. For the coming high luminosity era at LHC, the search for new observables to either observe jet quenching, or provide quantitative exclusion limits is necessary. In this letter we have shown that the microscopic model for collectivity implemented in Pythia 8, can reproduce one observed collective feature already observed in pp collisions with a hard probe, namely the ridge in Z tagged events, as observed by ATLAS. Basic features like z j are, however, unaffected, but highly sensitive to the collision geometry. In this letter it was shown that, for a toy event geometry, the model produces features similar to those observed in Pb-Pb collisions by CMS. The toy geometry study highlights the need for a better motivated theoretical description of the space-time structure of the initial state. The realization that the complicated interplay between fragmentation time and spatial structure is significant for precision predictions, dates back to the 1980's for collisions of nuclei [26]. With the discovery of small system collectivity, several approaches have been developed also for pp collisions (e.g. [27,28,29]), most (but not all) aiming for a description of flow effects. It is crucial for the future efforts that such space-time models attempt at describing both soft and hard observables at once, in order to avoid "overtuning" of sensitive parameters. In this letter it was done by first describing the ridge in Z-tagged events, and then proceed to investigate jet observables with the same parameters.
The major contribution of this letter is the proposal of several new observables to understand the effects on jet fragmentation from the shoving model in Z+jet events. The main idea behind these observables is to go from the wide-R region (wide jets), where collective effects, in form of the ridge, is already observed, to the very core of the jet, where only little effect is expected. The jet-p ⊥ is only affected little, and the observed 5% effect on the integrated quantity σ j , will be difficult to observe when also taking into account uncertainties from PDFs an NLO corrections, but nevertheless provides a crucial challenge for the upcoming high luminosity experiments at LHC, where larger statistics can help constraining the theoretical uncertainties better. More promising are the effects observed on hadron properties inside the jet, where the average hadron mass shows a 10% deviation and jet charge even larger. Even if an effect this large is not observed in experiment, its non-observation will aide the understanding of soft collective effects better, as the shoving model predicting the effect, adequately describes the ridge in Z-tagged collisions.
Acknowledgements
I thank Johannes Bellm for valuable discussions, and Peter Christiansen, Leif Lönnblad and Gösta Gustafson for critical comments on the manuscript. I am grateful for the hospitality extended to me by the ALICE group at the Niels Bohr Institute during the preparation of this work. This work was funded in part by the Swedish Research Council, contract number 2017-0034, and in part by the MCnetITN3 H2020 Marie Curie Initial Training Network, contract 722104. | 4,627 | 2019-01-22T00:00:00.000 | [
"Physics"
] |
Supplementary material for: Hidden state models improve state-dependent diversification approaches, including biogeographical models
The state-dependent speciation and extinction (SSE) models have recently been criticized due to their high rates of "false positive" results. Many researchers have advocated avoiding SSE models in favor of other "nonparametric" or "semiparametric" approaches. The hidden Markov modeling (HMM) approach provides a partial solution to the issues of model adequacy detected with SSE models. The inclusion of "hidden states" can account for rate heterogeneity observed in empirical phylogenies and allows for reliable detection of state-dependent diversification or diversification shifts independent of the trait of interest. However, the adoption of HMM has been hampered by the interpretational challenges of what exactly a "hidden state" represents, which we clarify herein. We show that HMMs in combination with a model-averaging approach naturally account for hidden traits when examining the meaningful impact of a suspected "driver" of diversification. We also extend the HMM to the geographic state-dependent speciation and extinction (GeoSSE) model. We test the efficacy of our "GeoHiSSE" extension with both simulations and an empirical dataset. On the whole, we show that hidden states are a general framework that can distinguish heterogeneous effects of diversification attributed to a focal character.
List of
: Description of scenarios and parameter values used to simulate the data. Table S2: Additional 17 models used in empirical study of conifers. Table S3: Description of scenarios and parameter values used to simulate phylogenetic trees and range distributions under the GeoSSE+extirpation model. Figure S1: Scheme of the transition rates between rate classes (RC0 to RC4) used for the simulation scenarios with multiple rate classes (Sims B, C and D). Figure S2: Proportion of widespread lineages on trees simulated under scenarios E and F. Figure S3: Summary of model support for simulated scenarios A to D. Figure S4: Summary of model support for simulated scenarios E to H. Figure S5: Accuracy of turnover and extinction fraction estimates for simulations scenarios A to D. Figure S6: Accuracy of net diversification estimates for simulations scenarios A to D. Figure S7: Results for relative net diversification rates and Akaike model weights (AICw) for simulation scenarios B and C. Figure S8: Distribution of Akaike weights for the model set fitted to simulation scenarios ext_A to ext_D. Figure S9: Distribution of parameter values across 100 simulation replicates for each of the scenarios ext_A to ext_D.
Section 2 -Extended simulation results
Section 3 -Performance of GeoSSE+extirpation models Table S1: Description of scenarios and parameter values used to simulate the data. Scenarios A to E are instances of the original GeoSSE model and GeoHiSSE models with varying number of rate categories. Scenarios F to H are comprised of different models. Scenario F is a custom extension of the GeoSSE model allowing anagenetic transitions (i.e., jumps) between the endemic ranges. Scenario G has only two endemic areas A and B (see more information in Magnuson-Ford and Otto 2012). Scenario H is not a joint tree and trait model and follow similar procedures as used by Rabosky and Goldberg (2015). Figure S1: Scheme of the transition rates between rate classes (RC0 to RC4) used for the simulation scenarios with multiple rate classes (Sims B, C and D). Transitions between rate classes were modelled with the same rate (0.05) following a meristic Markov model. As a result, diversification rates vary following a gradient across the branches of the three. Figure S2: Proportion of widespread lineages on trees simulated under scenarios E and F. See Table S1 Figure S3: Summary of model support for simulated scenarios A to D. Plots show distribution of Akaike Information Criterion weights (AICw) for each model (columns) computed with 100 simulation replicates. Box-plots in red are area-dependent models and gray plots are area-independent models. A) Data simulated under the area-dependent GeoSSE model. B) and C) simulated phylogenies with three and five diversification shifts unrelated to geography, respectively. D) Simulation of area-dependent diversification GeoHiSSE model with two rate classes. Table 1 shows list and description of fitted models and Table S1 show details for each simulation scenario. Table S1). Table 1 shows list and description of fitted models and Table S1 show details for each simulation scenario. Light blue shades represent the running 5% and 95% quantiles computed for all simulation replicates using 100 cumulative bins equally spaced from the root towards the tips of the tree. Dark blue shades (not always visible) show the limits between the running 25% and 75% quantiles. Red lines show the median of parameter estimates. See Figure S5 for estimates of net diversification. Light blue shades represent the running 5% and 95% quantiles computed for all simulation replicates using 100 cumulative bins equally spaced from the root towards the tips of the tree. Dark blue shades (not always visible) show the limits between the running 25% and 75% quantiles. Red lines show the median of parameter estimates.
Section 1
Testing the effect of heterogeneous rates on GeoSSE models --Earlier studies showed that the Binary State-dependent Speciation and Extinction model (BiSSE) shows an undesirable behavior when faced with rate heterogeneity in diversification across the tree that is not associated with character states (Rabosky and Goldberg, 2015). Rabosky and Goldberg (2015) show that BiSSE has an issue both with respect to the frequency in which the trait-dependent models are selected when no such process is present and with misleading parameter estimates for such models. The explanation for this behavior is general enough and may apply to every State-dependent Speciation and Extinction model (SSE). When rates of diversification are heterogeneous across the phylogenetic tree, the original SSE models have no means to accommodate the shifts in rates other than set speciation and/or speciation associated with different states to differ across the branches of the tree (Beaulieu and O'Meara, 2016). Thus, here we evaluate the behavior of the original GeoSSE model to area-independent shift in diversification rates, in order to show evidence of a similar pattern.
For this we used the same datasets generated for the simulation scenarios B and C (Table S1), but we restricted the set to include only the homogeneous GeoSSE models (see Figure 3, top panel). We chose to include representatives of our expanded GeoSSE models (i.e., GeoSSE+extirpation and anagenetic GeoSSE) because these might be prone to the same issues when no hidden rate classes are included in the set of models. The model set is comprised by one area-dependent and one area-independent configuration of the original GeoSSE, the GeoSSE+extirpation, and the anagenetic GeoSSE models (i.e., models 1, 2, 7, 8, 19, and 20 described on Table 1). We fit each model using Maximum Likelihood to obtain parameter estimates, computed their Akaike model weight (AIC w ) and performed model averaging.
Results show that the distribution of parameter estimates averaged across all models in the set and pooled for all simulation replicates is centered in the true value for the simulated datasets ( Figure S6, top row). In other words, parameter estimates show no difference between rates of diversification associated with areas 0 or 1 . Akaike weights across all simulations are not overwhelmingly biased towards area-dependent models ( Figure S6, bottom row). For instance, in many of the simulation replicates there is substantial AICw for both area-independent and area-dependent models, meaning that parameter estimates averaged across these models include contributions from both equal rates and area dependent rates. These results contrast with previous descriptions of the problem associated with BiSSE models. Figure S7: Results for relative net diversification rates and Akaike model weights (AICw) for simulation scenarios B and C. Both scenarios show heterogeneous diversification rates not associated with the areas (Table S1). Top plots show the distribution of ratios between net diversification rates for areas 0 and 0 + 1 computed for each simulation replicate. Red dashed vertical lines represent the true value for the ratios whereas horizontal blue lines show the empirical 95% CI. Bottom plots show distribution of weights across all simulation replicates for each of the models in the set, see
Section 2
Extended simulation results --Here we performed two sets of simulations to test the behavior of the GeoSSE model including hidden states. The first set of simulations is composed of four different scenarios that test area-independent and area-dependent diversification with varying degrees of heterogeneity in rates of diversification (Scenarios A to D). The second set has another four simulation scenarios that explore the behavior of the model under extreme cases.
The first three scenarios (E to G) simulate cases of reduced frequency (or complete absence) of widespread lineages whereas the last scenario (H) tests the case in which ranges have evolved only due to anagenetic changes (no cladogenetic events). Table 1 shows the parameter values used to simulate trees and geographic range distributions observed at the tips for 100 replicates for each of the scenarios. In all simulations we used 500 lineages in the tree as the stopping criteria. For the simulations with multiple rate classes (scenarios B, C and D), we used a meristic Markov model to control the transitions between rate classes such that each transition would represent a gradual change from the fastest rate class to the slowest by passing through the intermediary ones (see Figure S1). We fitted the 18 models shown in the Area-dependent and area-independent simulations --Results with scenarios A to D show that our GeoHiSSE models, in overall, are adequate to study rates of diversification dependent or not on geographical ranges. Figure S3 shows a summary of the results. In many cases multiple models with congruent diversification histories showed high Akaike weights. For example, in simulation scenario A, model 2 is an original implementation of the area-dependent GeoSSE model without hidden states whereas model 8, which also showed high Akaike weight in part of the simulations, is an area-dependent GeoSSE+extirpation model.
In the case of the area-independent simulation scenarios B and C, there are multiple models with high Akaike weight. However, every model showed in gray in Figure S3 (and Figure S4) are area-independent models. Simulation scenarios B and C show replicates with high Akaike weights for the models 3, 6 and 9 (Table 1, main text). Models 3 and 6 are instances of the area-independent model with different number of rate categories; model 3 has three hidden states and assumes that all transitions (including dispersions and transitions between rate classes) are constrained to be symmetrical and model 6 has two hidden states, but transition rates are estimated from the data. Model 9 is another area-independent model with symmetrical transition rates, but allows for rates of local extinction and range reduction to be estimated separately.
Results for simulation scenarios A to C are examples of the utility of applying model-averaging to estimate parameters taking into account the uncertainty in model choice.
Here different models show high Akaike weights on the simulations, but all these models are congruent with the generated data. The fact that there is uncertainty associated with which model show high Akaike weights has to do with the signal in the simulated data. Tree shape, frequency of observed ranges across lineages, distribution of branch lengths and etc, all vary among the simulation replicates within each scenario. Thus, it is natural to expect some level of model uncertainty, especially when fitting more realistic models that take into account heterogeneous rates of diversification associated or not with the observed species ranges, such as in our study. It is important to note, however, that model uncertainty is different from process uncertainty. Note that the diversity of models observed for each simulation scenario do not vary in function of whether the process is area-dependent or area-independent diversification. Although there is variance on the support for each model across replicates, the conclusion of whether the process is The last simulation scenario (D) is an area-dependent scenario with three rate classes.
Results are congruent with simulation scenario A, but with more variance in Akaike weights among models. It seems that model uncertainty is somewhat associated with rate heterogeneity across the tree, which is not surprising, given that this is the main reason for the inadequacy of Model performance under extreme scenarios --The simulated scenarios from A to D followed a joint tree and geography model of evolution, where diversification rates were tied to geographical areas and range evolution occurred through cladogenesis (i.e., through speciation of widespread lineages), or along the branches of the tree (i.e., due to dispersion and extirpation).
However, our knowledge of the processes that led us to observe lineages in particular geographic areas is often incomplete and empirical data can behave in ways not expected by our models. In other words, simulations based solely on SSE models, including hidden states or not, as generating models are naive surrogates with respect to empirical data sets. Here, we study two extreme cases of geographic range evolution with the objective of identifying odd behaviors when simulating data sets 1) where widespread ranges are rare or absent in extant species, and 2) where the evolution of areas are not tied to cladogenetic events.
Transitions between endemic areas in GeoSSE and GeoHiSSE are modelled as a two-step process. First, an endemic lineage disperses and becomes widespread and then it can either undergo cladogenesis, which generates two endemic sister lineages (one in area 0 and another in 1 ), or a local extinction in one of the areas reduces the range to endemic again. If extant widespread lineages are rare or absent, the information to infer cladogenetic and dispersion events between endemic ranges become limited, possibly leading to issues with parameter estimates and model adequacy. We first simulated datasets with widespread lineages as being rarely observed at the tips by generating data and trees under a GeoSSE model with speciation rates of widespread lineages five times faster than endemics (see scenario E in Table S1). This produced data sets with an average of ~7%, out of 500, extant lineages occupying widespread areas ( Figure S2A). However, parameter estimates across all simulation replicates showed that the low frequency of widespread extant lineages does not prevent our set of models from reaching meaningful estimates using model-averaging ( Figure 4E).
443
Alternatively, range expansion could have been rare throughout the history of the group whereas jump dispersal events (i.e., direct transitions between endemic distributions) have played an important role. To simulate such a scenario we used a GeoSSE model, but we allowed lineages to disperse between endemic areas without becoming widespread first (scenario F). This scenario resulted in an average of only 4%, out of 500, extant species occupying widespread areas ( Figure S2B). However, there is no evidence for a significant bias in parameter estimates for both area-dependent diversification rates or between-area speciation rates ( Figure 4F). On the whole, our approach of model-averaging across a large set of candidate models does not appear sensitive to rare extant widespread areas.
Finally, we explored the extreme possibility that the widespread range is completely absent both in extant distributions and in the evolutionary history of the group. In this scenario, changes in geographical distribution are the result of a) jump dispersal events between endemic areas or b) speciation events in one of the endemic ranges leading to one sister lineage occurring in the other area (see Magnuson-Ford and Otto, 2012). We relied on BiSSE-ness, the cladogenetic model for binary states (Magnuson-Ford and Otto, 2012), in order to simulate data sets that result in only two endemic areas observed at the tips (scenario G). When fit to our model set, the absence of widespread areas among the extant species produces estimates of the rates of between-area speciation ( s AB ) that are highly uncertain ( Figure 4G). The 95% density interval for the model-averaged estimate of s AB across nodes spans the extreme wide interval between 4 and 58 units. These estimates are orders of magnitude higher than the rates of speciation associated with each of the endemic regions. In contrast, estimates for the relative difference in within-area net diversification rates associated with each endemic area did not show a strong bias ( Figure 4G), suggesting that poor estimates for between-area speciation does not strongly bias our conclusions about range-dependent diversification rates.
All previous scenarios assumed that cladogenetic events were important in the evolutionary history of the lineages. This is a very plausible element of the model , since the data is expected to describe geographical ranges. However, here we also considered the performance of the model when this is not the case, perhaps because the coarse subdivision of ranges required by GeoSSE is grossly inadequate for the study system. For this, we generated datasets with transitions between areas restricted only to anagenetic dispersal events along the branches of the tree. We simulated trait-independent phylogenetic trees with two rates of diversification following Rabosky and Goldberg (2015). We then generated datasets using only a simple transition-based Markov model by restricting transitions between endemic areas to always pass through the widespread area (see the anagenetic GeoSSE model in Figure 3 of the main text, middle panel). The difference in within-area rates of diversification is larger than observed in any other simulation scenario ( Figure 4H). Moreover, the absence of cladogenetic events makes estimates for between-area speciation ( s AB ) uncertain, although raw values for the parameter are within the same order of magnitude of the true rates of diversification across the tree (grey lines in Figure 4H).
482
reduction happens when a widespread lineage ( 01 ) becomes locally extinct in one of the areas, leading to an endemic distribution area 0 (when extirpated from 1 ) or area 1 (when extirpated from 0 ). In contrast, the extinction of endemics result in the complete extinction of the lineage.
The original parameterization of the GeoSSE model maps both events to a single extirpation rate parameter associated with the endemic area 0 (x 0 ) and 1 (x 1 ). Goldberg and colleagues (2011) do consider the expansion of GeoSSE into different, often more parameter rich, variants, but no work so far had explored the effect of separating rates of range reduction from the extinction of endemics. We performed a series of simulations to test whether we can properly estimate separate rates for range reduction ( d AB->A and d AB->B ) and extinction of endemic lineages ( x A and x B ) using our expanded GeoSSE+extirpation model. We compared the parameter estimates between the original GeoSSE and the GeoSSE+extirpation model in the absence of hidden states.
In order to estimate separate rates for range reduction and extinction of endemic lineages the model needs to be expanded to include one more rates for each endemic area. The GeoSSE+extirpation model has a rate of range reduction for each area (parameters x 0 and x 1 ) and a separate rate of extinction of endemics (parameters x* 0 and x* 1 ). The GeoSSE+extirpation model can be extended to include hidden states, which we refer as the GeoHiSSE+extirpation set of models. Like all other GeoHiSSE models, the rates of range reduction as well as extinction of endemics are associated with the hidden states. Thus, for a GeoHiSSE+extirpation model with 2 hidden states we would have the parameters: x 0A , x 0B , x 1A , x 1B for range reduction and x* 0A , x* 0B , x* 1A , x* 1B for extinction of endemics.
[Please see notes about model complexity and the need for more species in the phylogeny in the main text.] Here we test if it is possible to differentiate between rates of range reduction from the extinction of lineages in endemic areas using the GeoSSE+extirpation model. We simulated phylogenetic trees and data under four distinct scenarios (Scen ext_A to ext_D, see Table S3).
For the first scenario we set range contraction and dispersion to be more frequent than extinction of endemic lineages (Scen ext_A). This scenario models the case in which events of dispersal and range contraction occur at the same rate, but the extinction of endemic lineage are relatively rare. In other words, recent dispersers face a higher chance of being extirpated from the area than established lineages. For the second scenario we kept the same generating values used for ext_A, but we increased the rate of extinction of endemic lineages in area 0 (Scen ext_B).
Diversification rates associated with area 1 are higher than in 0 , due to an increase in extinction in area 0 . The first and second scenario test the performance of our estimates for separate rates of range reduction and extinction of endemics as well as if such processes can carry a signal of area-dependent diversification. In the third and fourth simulation scenarios we changed the processes described for simulations ext_A and ext_B. Scenario ext_C repeats the same generating values as simulation ext_A, but we increased the rate of dispersion from area 0 to the widespread region 01 . In this case, extinction of endemics is still rarer than dispersion, but dispersion events from area 0 are now twice as frequent as from area 1 . With this we can explore if there are important confounding factors among the anagenetic events (i.e., dispersion, range reduction, and extinction of endemics). Finally, scenario ext_D flips the relationship between range reduction and extinction of endemics assumed in the previous simulations. Now extinction rates of endemic lineages are higher than the rate with which widespread lineages become endemic.
Scenarios ext_A to ext_C assume that recent dispersers are likely to lose part of their range whereas scenario ext_D explores the case that lineages are much more prone to extinction when they become restrict to endemic distributions than to have their range contracted.
We fitted a reduced model set with four models using Maximum Likelihood Estimate and estimated parameters by performing model averages using the Akaike weights for each of the models. Since the aim of these tests are to show the performance of the GeoSSE+extirpation models with respect to the original GeoSSE models, we decided not to include any instance of the GeoHiSSE or anagenetic versions of the GeoSSE model. Here we use a collection of area-independent and area-dependent models with and without separating range reduction from extinction of endemics (see Table 1 In terms of model weight, results were similar across simulation scenarios ext_A to ext_D. Both area-independent models 1 and 3 showed higher Akaike weight across all simulation replicates ( Figure S7). The different rates of extinction (Scen ext_B) or dispersion (Scen ext_C) associated with areas simulated in the data show no reflection in model weight when compared to other simulation scenarios (Scen ext_A or Scen ext_D). However, when we look to the parameter estimates for each of the models across the 100 replicates there is strong evidence that GeoSSE+extirpation models are able to correctly recover the generating parameters for the different scenarios ( Figure S7). Parameter estimates across all simulations (Scen ext_A to Scen ext_D) show that we can adequately distinguish between rates of range reduction and rates of extinction of endemic lineages. However, these results are conditioned on a phylogeny with 500 species and we strongly recommend performing similar tests if planning to use smaller trees. Figure S8: Distribution of Akaike weights for the model set fitted to simulation scenarios ext_A to ext_D. Box plots in grey are area-independent models and in red are area-dependent models. See Table 1 (main text) for description of the models and Table S3 for parameter values used for the simulations. Figure S9: Distribution of parameter values across 100 simulation replicates for each of the scenarios ext_A to ext_D. Here 'AIDiv' denotes area-independent models and 'ADDiv' denotes area-dependent models. Rows represent simulation scenarios whereas columns are different models. Columns 1 and 2 show original GeoSSE models (7 parameters) and columns 3 and 4 are GeoSSE+extirpation models (9 parameters). Parameters linked by '~' were constrained to the same value during Maximum Likelihood estimation. The blue horizontal lines show the values of the parameters used to generated the data for each scenario (note that scale in y axes vary). | 5,594.4 | 2018-10-07T00:00:00.000 | [
"Computer Science"
] |
AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems
In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular “action at a distance” is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system's underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor.
Introduction
Complex molecular assemblies and networks of interacting biomolecules mediate many cellular processes, such as cell growth, metabolism, and signaling. Molecular components of such assemblies and networks have been visualized and structurally elucidated at atomiclevel resolution with experimental techniques including x-ray crystallography [1], nuclear magnetic resonance (NMR) [2], and cryo-electron microscopy (cryo-EM) [3]. The combination of structure elucidation with the application of biophysical methods reporting on the dynamic properties of the molecules (e.g., single molecule Forster resonance energy transfer (smFRET) [4], electron paramagnetic resonance (EPR) [5], Molecular Dynamics (MD) simulations [6], and elastic network models [7]) has produced detailed information regarding functional mechanisms. The application of these powerful methods of molecular biophysics has illuminated, especially in proteins, the large ensemble of conformations involved in the functional mechanisms of biomolecules, and hence the importance of conformational entropy. This conformational entropy is much higher than expected from crystal structures alone, and the relatively discrete structural elements comprising these systems (i.e., loops, α-helices, β-strands, and a large number of tertiary structures in proteins) often exhibit coupled conformational dynamics. These coupled dynamics are especially crucial in receptor proteins, which are used to process and transmit information in their signaling function. For example, transmembrane receptor proteins, such as the G protein coupled receptors (GPCRs), bind extracellular ligands that trigger receptor "activation", which is reflected by a change in conformation on the intracellular side of the protein where the transduction of the signal into the cell is accomplished [8]. This type of "action-at-a-distance" in the modulation of a specific function is referred to as allostery [9]. While allostery has been documented in many systems and has been suggested to be present in nearly all proteins [10], it is still unclear how most allosteric mechanisms work at the molecular level. A strong theoretical basis for allostery is needed, however, because such mechanisms are ubiquitous and essential for the transduction of signals and transmitting information both within proteins and throughout cellular systems. In addition, while there has been some success in engineering allosteric proteins from pre-existing components and scaffolds, a lack of detailed understanding has placed de novo design out of reach [11].
Considerations of theoretical models of allostery have generally followed a thermodynamic approach [9,12,13]. When biochemical measurements of the functional output of proteins can be made, the allosteric efficacy [14], which has also been called the allosteric coupling constant [15], can be used as a good measure of a ligand's allosteric influence on the protein's functional state. For the case of receptors, this downstream signal transduction can be measured experimentally. Assuming that the receptor has two states, on and off, an allosteric efficacy, α, can be defined as: where K bound and K unbound are the equilibrium constants for the activation reactions of the receptor when bound or unbound to the allosteric ligand, respectively. An equilibrium constant can be defined in terms of concentrations or rate constants: (2) where [R on ] and [R off ] are the steady state concentrations of the receptor in the on and off state, respectively, and k on and k off are the corresponding rate constants for the transition to the on and off states (see Figure 1). The concentrations of the two receptor populations can be inferred from biochemical measurements of function, and the allosteric efficacy of the ligand of interest can be calculated from (1) and (2). When α > 1, the on state of the receptor is preferred in the presence of ligand and the ligand is considered an agonist (activator of function), and when α < 1, the off state of the receptor is preferred in the presence of ligand and the ligand is considered an inverse agonist (inhibitor of function). When α is 1, the ligand has no effect on the functional state of the receptor and the ligand is considered a neutral antagonist (inhibitor of activation by another ligand). This type of allostery, in which the equilibrium constant is modified by the ligand, is often described as "K-type", as opposed to those that change enzyme catalysis in terms of k cat or V max , which are described as "V-type" [15].
It is possible to conceptualize the allosteric efficacy of a ligand as a steady state signal-tonoise ratio, where the signal for the presence of a ligand in the binding site is encoded in the receptor on/off equilibrium constant that is sensed by the intracellular proteins that detect the signal by interacting with the receptor population. In the absence of ligand the equilibrium constant is non-zero (i.e., the probability of the receptor being active is non-zero), creating noise.
To obtain a formal definition of the allosteric efficacy in this context, it is possible to write the signal-to-noise ratio, SNR, as: (3) where P x is the power of x defined as: (4) Therefore, the power of the equilibrium constant signal can be written as: (5) and because at steady state the equilibrium constant is invariant with time by definition: (6) then: (7) Accordingly, the allosteric efficacy of an agonist is a measure of the signal-to-noise ratio of signaling through the receptor with that agonist. If the ligand is an inverse agonist, the pertinent measure is the equilibrium constant for the inactivation reaction, so that the signalto-noise ratio is simply α -1 . When both the signal and noise are Gaussian, the Shannon-Hartley theorem [16,17] relates the signal-to-noise ratio to the information theoretical channel capacity C (which is the upper limit on the information rate or mutual information), by: (8) where B is the bandwidth of the channel. While Equation (8) is not directly applicable to the allosteric efficacy, as the signal and noise are not Gaussian, the treatment of allostery as an information transmission process has had much success recently [18][19][20], and we will confirm a strong relationship between the mutual information and allosteric efficacy later in the manuscript.
An energy-based expression of the allosteric efficacy can be written as the difference in free energies, G, of the four respective states: (9) where R is the gas constant and T is the temperature. This model can be extended to systems with multiple ligand binding sites and/or allosterically regulated sites (for a detailed review, see [13]), but it clearly provides only a phenomenological explanation of allostery. According to this description, often considered "the thermodynamic" perspective, allostery occurs because of the differences in free energy of the respective states. However, this conclusion appears to be a definition, i.e., that allostery is the phenomenon in which the stability of the on state relative to the off state is greater when the ligand is bound, and lesser when the ligand is unbound. From a "structural" perspective, one needs to consider the differences in free energy as emerging from some feature of the underlying network of interacting structural components, and it is this feature that makes the system allosteric.
To understand allostery at a level that explains the structural context for how allosteric biomolecular systems work requires a quantitative theoretical description that bridges the features of the structural components and their interactions, to the thermodynamic allosteric parameters. We address this problem in the next section.
The Thermodynamic Allosteric Efficacy as a Function of Local Interactions
We approach the problem of "how allostery works" by studying the statistical mechanics of interacting structural components. These structural components may be any subset of a biomolecular system that can be treated as a unit when described at some level of coarsegraining (i.e., a helix, a β strand, a helical bundle, a binding site, etc). The approach we will pursue is conceptually similar to the ensemble allosteric model (EAM) [12], but with the goal of introducing a structural context that can be analyzed analytically. Defining an ncomponent system X where for a single configuration each component can be in one of an arbitrary number of discrete states, we write the potential energy function of a given configuration of X, U(X), as: (10) The first term in (10) represents the conformational energy of each state of each component independent of other components, and the second term represents the pairwise interaction energy between components; all interaction terms when i = j are 0. We can write the probability of any conformation of the system according to the Boltzman distribution as: (11) β is 1/k B T, where k B is the Boltzmann constant and T is the temperature in Kelvin. The numerator is known as the Boltzmann factor, and Z is the partition function, which sums over the Boltzmann factors of all states and normalizes the probability: (12) We can then define the specific case of ligand binding to a two-state receptor. This system can be defined as a two-component system in which each component is two-state: one component representing the receptor, R, with states on and off, and the second component representing the ligand, L, with states bound and unbound. It should be noted that for the ligand, the conformational energy term represents the component of the binding energy that is independent of the state of the receptor. Using the explicit definition of the concentration: (13) where N x is the number of molecules of X and V is the volume, we can rewrite (2) with the explicit definition of protein concentration: (14) where N is the total number of receptors and f on and f off are the fraction of receptors in the on and off states, respectively. Given that the system is ergodic, the frequency of a given state at steady state will converge to the ensemble probabilities. Rewriting (1) by substituting thermodynamic equilibrium constants with ratios of probabilities, we can define the allosteric efficacy as: (15) Using (10) and (11), we can write (15) as: (16) Equation (16) reduces to: (17) We then find the analogous expression of (9): (18) As (18) indicates, the allosteric efficacy is a function the interaction energy between the states, and we have succeeded in expressing the thermodynamic allosteric efficacy as a function of local interactions in our simple two-component ligand/receptor system. However, this result is significantly more useful for considering multi-component systems if additional energetic symmetries are imposed by using an Ising model potential energy function. While these symmetries are not strictly realized in a biomolecular system, we will show that their application leads to concise analytic expressions that are qualitatively and quantitatively accurate as well for systems in which these symmetries are not present.
The Allosteric Ising Model (AIM) for Multicomponent Systems
The Ising model is a statistical mechanical model originally developed to describe phase behavior in ferromagnetic materials [21]. The Ising model, as well as Ising-like models, have since been applied to other complex systems with collective behavior [22,23], including cooperativity during folding [24][25][26] and in oligomeric assemblies [27,28]. In the Ising model, each particle has two states, corresponding to a spin state of up or down: (19) The potential energy function of an n-component Ising model is: (20) In the Ising model, hi is the potential energy of particle i due to the magnetic field, and j ij is the spin coupling between particles i and j, where j ii is taken to be 0. If the field term is taken to be site-specific, one can see that the field term can be considered to correspond to the conformational energy, and the spin coupling term to the pairwise interaction energy. We can rewrite the potential function as: (21) where is the conformational energy of component i, and is the interaction energy of components i and j. By using (21) for the potential energy function, we impose the following symmetries on the two-state components (with binary states represented by up and down arrows): (22) For Ising models composed of several components and various interaction topologies, these symmetries allow for concise analytical expression for the allosteric efficacy and binding affinity. We will refer to these models as Allosteric Ising Models (AIMs).
Considering the analogy to the ligand(L)-receptor(R) systems and treating the on/off and bound/unbound states as up/down spins (see Figure 2A), the potential energy function according to (21) can be written as: (23) LeVine and Weinstein Page 7 As the interaction energy between the receptor and the ligand must be zero when the ligand is in the unbound state, we write an alternative non-Ising potential energy function where the interaction energy is 0 when the ligand is unbound: (24) This equation can be re-written as an Ising model potential energy function: (25) Thus we will proceed with (23) despite the seemingly non-physical interaction, and later confirm that the relationships derived using this model accurately represent those of non-Ising systems. The allosteric efficacy using this potential energy function is: (26) and we can simplify (17) to: (27) Equation (27) indicates that in the Allosteric Ising Model for the ligand/receptor system ("ligand/receptor AIM"), the allosteric efficacy is simply a function of the ligand-receptor interaction energy term. Positive allostery (agonism) is attributed to negative interaction energy; negative allostery (inverse agonism) is attributed to positive interaction energy. Note that as the interaction energy between the ligand and receptor is related to the allosteric efficacy by a log transformation, we will use here the allosteric efficacy and interaction energy interchangeably, and specifically use interaction energy for visual representations, where the log scale is required.
The two-component model assumes that the protein is entirely rigid, with two global states. However, it is possible for the ligand to allosterically modulate multiple distinct allosteric sites (see Figure 2B). It is well known, for example, that GPCRs can signal through multiple downstream signaling pathways through coupling to various G protein subtypes and βarrestin [29,30], and that different ligands can differentially activate these pathways [31,32]. Therefore it may be necessary to distinguish among multiple allosteric sites in the representation of receptor allostery. If we introduce two non-interacting allosteric sites, A 1 and A 2 , we can write the potential energy function as: (28) Then the allosteric efficacy at a site is: (29) The probability of each state is the sum of the probability of two underlying states: (30) which is equal to: (31) This reduces to: (32) which indicates that the allosteric efficacy of a ligand at an allosteric site is independent of other allosteric sites it modulates as well (provided the allosteric sites are not coupled through another interaction). In terms of receptor signaling, this analysis predicts that there could exist ligands with absolute bias for only one signaling pathway. This would require the downstream effectors (e.g., the G proteins or β-arrestin for GPCRs) to interact with unique and independent allosteric sites.
Representation of allosteric propagation through specific regions within the protein
In addition to the existence of multiple allosteric sites, allosteric conformational coupling can be propagated through specific regions within the protein, often called "paths" or "channels". Using the AIM approach described here, we can expand the treatment of allostery to proteins with multiple structural components, where some components are allosterically regulated, and some others mediate the allosteric regulation. We begin with a three-component model, composed of the ligand L, a channel C, and an allosteric site A (see AIM represented in Figure 2C). The potential energy function is: (33) The allosteric efficacy is then: LeVine and Weinstein Page 9 Entropy (Basel). Author manuscript; available in PMC 2015 November 19.
Equation (34) simplifies to: (35) where cosh is the hyperbolic cosine function: (36) It should be noted that the exponential term in (35) is the conditional allosteric efficacy. The conditional allosteric efficacy can be written as the sum of weighted allosteric efficacies, with each allosteric efficacy conditioned on a different state of the channel and then weighted by the corresponding probability of that state: (37) where for a given state, s, of C: (38) Equation (38) simplifies to: (39) Comparing (39) with the allosteric efficacy of the two-component ligand/receptor system expressed in (27), it is clear that the conditional allosteric efficacies in the three-component system are simply the allosteric efficacies of the corresponding two-component systems.
We can then differentiate the allosteric efficacy contributed by the direct interaction of two components, i.e., the conditional allosteric efficacy, from the indirect contributions, and write: where the allosteric efficacy contributed by the indirect interaction is: (41) Importantly, (41) provides a description of the allosteric efficacy as a function of the channel through which it is propagated. There are immediate inferences that can be drawn from this representation. First, the channel must have little preference for either one of its conformations, so that signaling through it can have a high intrinsic signal-to-noise ratio. Based on this inference, mutations that further stabilize the intrinsically preferred conformation of a channel will decrease the allosteric efficacy of a ligand, whereas mutations that destabilize that conformation will increase the allosteric efficacy. The existence of these two classes of mutations has immediate implications for the ability to test experimentally the role of specific domains in allosteric signaling. Second, because allosteric transmission through the channel depends on a balance between the channel's conformational energy and the interaction energy between the channel and ligand, and the channel and allosteric site, it follows that a low intrinsic signal-to-noise ratio can be overcome by an increased coupling of the ligand to the channel. Lastly, if the sign of the coupling of the ligand to the channel is opposite that of the channel to the allosteric site, the allosteric signal can be reversed. Consequently, a binding site on a protein that has been evolved for positive allostery by endogenous ligands, can be targeted as a site for negative allosteric modulation, and vice versa. It is well known that endogenous agonist-binding sites can be targeted by inverse-agonists, so this result is anchored in experimental evidence.
The Channel as a Chain of Interacting Structural Components
Comparison of (35) with (39) indicates that the allosteric efficacy can be written in terms of the conditional allosteric efficacies due to direct interactions: (42) In effect, the conditional allosteric efficacy is the signal-to-noise ratio for a single step in the signal propagation process, and the effective signal-to-noise ratio for the entire signal propagation system can be described by a non-linear function of all the constituent propagation steps.
Equation (42) can also be written as the effective interaction energy, : (43) and thus as the sum of the direct and indirect interactions: (44) It should be noted that the designation of channel versus allosteric site is purely an operational definition in which the site that performs the function of interest is referred to as the allosteric site. If both sites are functional, such as the example of two independent allosteric sites described above, and if they interact, we can rewrite (42) as: (45) The description of the allosteric efficacy as a function of the channel through which it is propagated, in (41), indicates that if the channel is a one-dimensional chain of interacting structural components, the allosteric efficacy is quickly diminished (it has been shown that the spin correlation function decays exponentially with distance in one-dimensional Ising models [21]). In Figure 3, the effective interaction energy between the first and last components of one-dimensional Ising chains with uniform conditional allosteric efficacies of 10, 100, 1,000, 10,000, and 100,000 are shown as a function of chain length. For weakly interacting systems, channels formed by structural components interacting in series do not appear to be good mediators of allosteric efficacy. The prevalence of multi-segment transmembrane signaling complexes may indicate an evolutionary mechanism to overcome the limitations of serial channels.
Comparison of Allosteric Propagation in Ising and Non-Ising Systems
As described in Section 2.2, the above analysis is made possible through the energetic symmetries imposed by the Ising model. However, it is unlikely these energetic symmetries exist in real allosteric proteins. Thus, it is important to consider how well the relationships derived from AIMs describe non-Ising two-state models, which are expected to be better representations of the types of interaction networks present in the biomolecular systems of interest.
To consider this problem, we sampled 100,000 non-Ising two-state allosteric systems with interaction energies and configurational energies sampled from normal distributions of mean 0 and standard deviation of β −1 , 3/β, or 5/β. The exact allosteric efficacies, calculated from the exact probabilities of each state, were then compared to the allosteric efficacies estimated from (42) using the direct allosteric efficacy terms. We should note that while direct allosteric efficacies can be calculated for non-Ising model, the calculation of the configuration energy term followed: (46) As above, we addressed problems that may arise from the non-physical interaction energy between unbound ligand and the protein by setting to 0 all interaction energies with the unbound ligand. Results of these calculations are shown in Figure 4, where the corresponding effective interaction energies have been used for clarity. Our calculations indicate that (42) is a good estimate of the true allosteric efficacy in non-Ising systems in which the allosteric efficacy is high (see Figure 4A). As the standard deviation on the energy term distribution increases, and more systems have significant deviation from Ising-like behavior, two distinct groups of false positives (exact effective interaction energy is 0 but estimated interaction energy is non-zero) and true negatives (exact effective interaction energy is non-zero but estimated interaction energy is 0) do appear, but the sign of the allosteric modulation is conserved (see Figures 4B,C).
That the model maintains high accuracy for systems with high allosteric efficacy in spite of the two groups of inaccuracy (i.e., false positives and true negatives), suggests that this model should reflect many of the qualitative and quantitative properties of actual allosteric systems.
A Relation of AIMs to the Structural Dynamics Analysis of Biomolecular Function
Efforts to identify allosteric sites and channels in the structures of functional biomolecules have utilized estimates of correlation or mutual information between the structural dynamics of known allosteric sites and candidate modulation sites or channels, most often based on the analysis of molecular dynamics (MD) trajectories [33,34,18,19] or elastic network models (ENMs) [35,36]. Equation (43) indicates that structural components that can act as channels will have high effective interaction energy with known allosteric sites (e.g., ), and the Shannon-Hartley theorem, (8), suggests that the allosteric efficacy can be related to the mutual information via the channel capacity. It is not clear, however, how this relates to the mutual information that is evaluated from an MD simulation. As we and others have used mutual information successfully to interpret the structural dynamics and allostery from MD trajectories [18][19][20], it is interesting to test the use of mutual information as an identifier of allostery in the context of AIMs. To this end we calculated the symmetric uncertainty [37], a normalized variant of the mutual information, between each component in two-component Ising models and two-component non-Ising models, and compared the symmetric uncertainty to the absolute interaction energy. The symmetric uncertainty (SU) between components is: (47) where I is the mutual information: (48) and H is the Shannon entropy: (49) We generated 100,000 two-component Ising systems and 100,000 two-component non-Ising systems with energy terms sampled from a normal distribution with mean 0 and standard deviation of 1, and calculated the symmetric uncertainty and allosteric efficacy of each. We find that the symmetric uncertainty enforces a lower limit on the allosteric efficacy, and allosteric efficacy increases with higher symmetric uncertainty (see Figure 5). Thus, mutual information is a good predictor of allosteric activity in the two-state models explored here. The use of mutual information in systems that are not two-state will be discussed further below.
AIMs and Multiple Allosteric Channels
Many proteins have been suggested to have multiple allosteric channels [38]. Assuming that the channels are independent, careful algebra (not shown) reveals that to study the allosteric efficacy of a multi-channel system one can iteratively replace the direct interaction energy term with a direct interaction and indirect interaction of the same effective interaction energy. The effective interaction energy due to multiple independent channels is additive: (50) and the allosteric efficacy is then multiplicative: (51) This formally obvious result reveals the advantage of multiple channels in an allosteric protein: perturbations such as mutations that disrupt the conformational stability of one channel will not abolish allosteric function completely. Many parallel weak channels introduce significant robustness when compared to the allosterically equivalent single strong channel built in series, because the latter is completely eliminated by disruption of even a single interaction between two of its structural components.
To test the ability of Equation (51) to reflect accurately the behavior of non-Ising systems, we again constructed 100,000 two-and three-channel non-Ising allosteric systems using the methodology described for single channel systems, and compared the resulting allosteric efficacy to that calculated using (51) (see Figure 6). Again, we find good agreement between the estimates using (51) and the exact calculated efficacies, although the accuracy is slightly reduced as the number of channels increases from two to three.
Because it is unlikely that allosteric proteins consist of absolutely independent channels, we explored the effect of interaction between channels through the use of two AIMs: one twochannel system where both channels provide equal magnitude positive allosteric coupling, and one two-channel system where both channels are of equal magnitude but opposite direction. The allosteric efficacy was calculated for each system as a function of the interaction energy between the two channels of allostery for ligands that are coupled to one, or both channels.
As depicted in Figure 7, we found that when two channels mediating positive allosteric modulation have a negative interaction energy, the allosteric efficacy of the ligand is increased, even if the ligand only interacts with one channel ( Figure 7A). This is not unexpected; the second channel acts as an indirect channel from the first channel to the allosteric site and additionally multiplies the allosteric efficacy of the channel. However, if the ligand interacts with both channels, the allosteric efficacy is not the square of the allosteric efficacy of binding to one channel as would be for two identical, independent channels. This is because the interaction of the ligand with the first channel has already partially shifted the conformational distribution of the second channel, decreasing its channel efficacy by effectively increasing its intrinsic conformational preference (and thus its intrinsic signal-to-noise).
For the second two-channel system, with channels providing allosteric coupling in opposite directions, we find that when the interaction energy between the channels is negative, there is decreased allosteric efficacy for the ligand in either channel, whereas positive interaction energy between the channels leads to increased allosteric efficacy ( Figure 7B). From the perspective of the positive channel, if the channels have a negative interaction energy, the second (negative) channel is an indirect channel that flips the sign of the allosteric signal; this leads to reduced overall allosteric efficacy due to negation. However, if the two channels have a positive interaction energy, the signal through the second channel is flipped twice and left unchanged, leading to increased allosteric efficacy. Interestingly, if the ligand interacts with both channels equally, the effective interaction energy from this pair of channels is 0, independent of the interactions between the channels. In a receptor with these characteristics, antagonists could interact with each channel without conformational preference for the channel, or interact with both channels with the same sign, leading to no allosteric signal.
Illustration of AIM-Based Analysis of Allosteric Coupling Mechanisms: The Asymmetric D2 Receptor Homodimeric Signaling Complex
The application of the new formalism based on AIMs was used thus far to represent small, ideal systems in order to extract insights into the physics of allostery on a conceptual level. To examine the practical implementation of AIMs for real allosteric proteins of biological interest, we chose to construct AIMs consisting of a small number of structural components where the numerical calculations of allosteric properties can be performed easily. Such use of AIMs as a coarse-grain level of representation is advantageous in testing hypotheses about the underlying structural mechanisms of real allosteric proteins. This concept is illustrated here with the example of a well-characterized GPCR dimer system. We constructed a model of asymmetric signaling in the dopamine D2 receptor (D2R) homodimer, based on the structural model of the asymmetric dimer and the constructs used to explore its function that were published recently [39]. Because D2R can signal as both a monomer and a homodimer, a novel experimental construct developed in the Javitch lab [39] was required to make possible the characterization of the dimer as a signaling unit. The results demonstrated experimentally that the D2R homodimer cannot signal through each monomer simultaneously, but instead signals through a single protomer at a time in an asymmetric manner (the signaling protomer will be referred to as "protomer A"). Furthermore, the results indicate that the function of the protomers is characterized by negative cooperativity: the stabilization of the on state of the non-signaling monomer ("protomer B") by agonist biding decreases signaling by protomer A, whereas the stabilization of the off state of protomer B by the binding of an inverse agonist increases signaling by protomer A. Lastly, it is shown in [39] that perturbations known to completely disrupt activation in the monomer, including: (i)-ablation of ligand binding, (ii)-removal of intracellular loop 3 (IL3), and (iii)-mutations introduced in (a)-intracellular loop 2 (IL2), (b)the conserved DRY motif, and (c)-the conserved NPxxY motif, all disrupt activation in the homodimer when applied to protomer A. Unexpectedly, however, the perturbations in (iii) also disrupt activation when applied to protomer B.
A molecular model of the homodimer complex with the G protein that senses the activation of the receptor was constructed in [39] to explain the experimental results in a structural context. The template for this model was the active state crystal structure of another GPCR, rhodopsin, bound to its G protein, transducin. In this molecular model the interface of the homodimer involves the 4th transmembrane segment (TM4), and the G protein interacts with the signaling protomer A through IL3, IL2, and helix 8 (H8), while protomer B interacts through its IL2 and H8 (see Figure 8). We used AIMs as described below to explore the feasibility of the allosteric properties proposed for this structural model.
Based on the experimental measurements of activation, an AIM representing the homodimer was constructed starting with a model for a signaling monomer (monomer A) and a G protein that can bind this monomer and become activated. Since the IL2, DRY, and NPxxY mutations behave identically in the experiments, we represented all three as a single structural component termed the conserved binding motifs (CBMs), due to their role in G protein activation by the GPCR [40][41][42][43][44]. In this AIM (see Figure 8A), the signaling monomer is composed of the following structural components: a ligand that can bind and unbind, a transmembrane domain, and two intracellular regions (IL3 and the CBMs); the G protein is composed of a structural component that can bind and unbind the signaling monomer, and one that can be activated. The conformational energies of the components of each protomer were chosen to prefer the off state (u conf =1), and the interaction energies between all components were negative such that they preferred to be in the same state (u int = -1). We find that this coarse grained model responds as expected to agonists, antagonists, and inverse agonists (see Figure 8B). To create a homodimer with negative cooperativity, we then added to the AIM a negative cooperativity between the one monomer that can bind G protein (which is now protomer A) and one that cannot (protomer B), represented as a positive interaction energy between their transmembrane domains (see Figure 8C). We then calculated the allosteric efficacy for the homodimer when protomer A was bound to agonist and protomer B was simultaneously bound to either an agonist, an antagonist, or an inverse agonist. This model reproduces the observed negative cooperativity (See Figure 8D) To explore the effects of removing IL3 and introducing the CBM mutations, we constructed AIMs with the perturbations modeled as either: i) stabilizing the off state of the mutated structural component, ii) stabilizing its on state, or iii) reducing the interaction energy between the structural component and the G protein to 0. Modeling the two perturbations in protomer A by imposing (i) or (iii), reduced activation as expected. However, stabilizing the off state of IL3 in protomer B increases activation in our model when it should have no effect, indicating that treating the IL3 mutation such that it eliminates interaction between IL3 and the G protein is a better model. On the other hand, treating the CBM perturbation in protomer B as stabilizing the off state leads to more activation, so that the effect of the mutation cannot be explained without an interaction between the CBM in protomer B and the G protein. To reconcile these effects in the model, we assumed that protomer B and the G protein bind in a state-independent way (the G protein's state independent binding is represented by in the AIM), and modeled the CBM mutation effect as further decreasing state-independent binding. We find that if is increased from 1 to 2, allosteric efficacy is reduced (see Figure 8D). The finding that state-independent interactions between the G protein and CBMs on both protomer A and protomer B are required for activation is in full agreement with the structural model of the dimer as presented [39], in which not only protomer A, but also IL2 and H8 from protomer B interact with the G protein directly. As this structural information was not used in the construction of the AIMs, the prediction from the allosteric model underscores the ability of the AIMs-based approach in this illustration to connect the representation of allostery with the structural context of the modeled biomolecular systems.
Conclusions
We have explored models of biomolecular allostery through the use of Allosteric Ising Models (AIMs) in order to develop a quantitative theoretical description that bridges the features of the structural components and their interactions, to the thermodynamic allosteric parameters. From this perspective, we show first that the allosteric efficacy is the steady state signal-to-noise ratio for the ligand signal through the corresponding noisy receptor. We find that the allosteric efficacy, or the corresponding effective interaction energy, between two allosterically coupled sites can be expressed in terms of the conformational and interaction energies of the constituent parts for many small systems and interaction motifs. This formulation allows us to show that the allosteric efficacy is the product of the indirect allosteric efficacies through independent pathways, suggesting a mechanism by which biomolecular systems have evolved to be robust to mutation. While the equations were derived here using the Ising model to make use of symmetries in the potential energy function, we show that the model can produce good estimates of the allosteric properties of non-Ising two-state pairwise interaction models as well.
A general inference from the use of AIMs as discussed here is that the results can suggest some constraints on the design principles of allosteric proteins. Thus, we find that it is more efficient and more robust to use multiple parallel channels that are individually weak than to use a single series channel that is strong, and that interactions between the parallel channels can additionally increase allosteric efficacy. From a structural perspective it is possible to surmise that α-helices behave as strong serial channels, where as β-sheets behave more like coupled parallel channels that are individually weak. Indeed, it has been shown that significant long-distance correlations exist in β-sheets [45], but little work has been done to study the connection of the properties of these fundamental units of protein structure to their involvement in known allosteric mechanisms. Understanding the allosteric properties of such structural components and common structural motifs from the perspective shown here offers valuable insight into how the wide array of allosteric proteins observed in nature could have been obtained from the limited number of amino acids and folding motifs.
The illustration of the application of AIMs to the D2 receptor homodimer was successful in producing an allosteric model that predicted structural details of molecular interactions. However, it is important to note that the AIM framework assumes that structural components within biomolecular systems exhibit two-state behavior. While this assumption has been used widely in the study of GPCRs and transporters (e.g., the proposed "rocking bundle" mechanism [46,47]), experimental and computational studies indicate [30,[48][49][50][51][52][53][54] that the character of the conformational space sampled by these molecular machines is not strictly two-state as is often assumed. The principles demonstrated in this manuscript are not mathematically transferable directly to models where structural components require representation by: i)-more than two discrete states, or ii)-continuous states in one or more dimensions. The study of the more complex systems necessitates a more general approach such as the N-body information theoretical analysis we have previously developed [18,55]. We have used such an N-body Information Theory (NbIT) analysis to identify allosteric channels and collective behavior in both transporters [18] and GPCRs [55]. To address the more complex properties of large allosteric systems such as the complex biomolecules responsible for cell function, it may be necessary to formulate a generalization of the NbIT model that allows arbitrary allosteric systems to be constructed and explored in the manner in which the AIMs were analyzed here. Thermodynamic cycle of a two-state ligand/receptor activation reaction. The receptor (blue circle) has an on and an off state (square and triangle indentations, respectively), both of which can bind a ligand (red triangle). The kinetic parameters are shown for the two equilibria of interest. The effective interaction energy through serial channels. Effective interaction energies of the first and last components of one-dimensional Ising chains are plotted as a function of chain length for conditional allosteric efficacy values of 10 (black), 100 (blue), 1000 (purple) 10,000 (red) and 100,000 (orange). The inset shows detail for short chain lengths. The effective interaction energy is seen to decay exponentially with channel length. Using the Ising model to estimate effective interaction energies in non-Ising threecomponent/two-state systems. The exact effective interaction energies of 100,000 threecomponent/two-state non-Ising systems are plotted against the effective interaction energy estimated using the equations derived for the three-component Ising model (see (42)). The systems are generated using energy terms sampled from a normal distribution of mean 0 and standard deviation of 1/β (A), 3/β (B), and 5/β (C) and the points are plotted with 10% opacity. Calculated mutual information between the channel and allosteric sites sets a lower bound on the allosteric efficacy. The symmetric uncertainty between the two components is plotted against the absolute effective interaction energy for 100,000 two-component/two-state non-Ising models (A), and two-component Ising models (B). The systems are generated using energy terms sampled from a normal distribution of mean 0 and standard deviation of 1/β, and the points are plotted with 10% opacity. Relation of effective interaction energies in non-Ising two-state systems with multiple independent channels to estimates from the corresponding Ising model. The exact effective interaction energies of 100,000 two-state non-Ising system is plotted against the effective interaction energy estimated using the equations derived for the n-channel Ising model (Equation (51)) for two (A), and three (B) independent channels. The systems are generated using energy terms sampled from a normal distribution of mean 0 and standard deviation of 1/β, and the points are plotted with 10% opacity. The effective interaction energy of a two-channel AIM as a function of the interaction energy between the channels. (A): The two-channel system in which each channel contributes to positive allosteric modulation is shown for a ligand that interacts with one channel (blue) or both channels (black). (B): A two-channel system with one positive allosteric channel and one negative allosteric channel is shown for a ligand that interacts only with the positive channel (blue), only with the negative channel (red), or both channels (black). The effect of interactions between channels is seen to modify significantly the allosteric signal transduction. | 9,578.8 | 2015-05-01T00:00:00.000 | [
"Physics"
] |
α-Asarone blocks 7β-hydroxycholesterol-exposed macrophage injury through blocking elF2α phosphorylation and prompting beclin-1-dependent autophagy
Macrophage apoptosis is salient in advanced atherosclerotic lesions and is induced by several stimuli including endoplasmic reticulum (ER) stress. This study examined that a-asarone present in purple perilla abrogated macrophage injury caused by oxysterols via ER stress- and autophagy-mediated mechanisms. Nontoxic a-asarone at 1-20 M attenuated 7β-hydroxycholesterol-induced activation of eukaryotic initiation factor 2a in macrophages leading to C/EBP homologous protein (CHOP) expression and apoptosis due to sustained ER stress. The a-asarone treatment increased the formation of autophagolysosomes localizing in perinuclear regions of 7β-hydroxycholesterol-exposed macrophages. Consistently, this compound promoted the induction of the key autophagic proteins of beclin-1, vacuolar protein sorting 34 and p150 responsible for vesicle nucleation, and prompted the conversion of microtubule-associated protein 1A/1B-light chain 3 and the induction of p62, neighbor of BRCA1 and autophagy-related (Atg) 12-Atg5-Atg16L conjugate involved in phagophore expansion and autophagosome formation. Additionally, a-asarone increased ER phosphorylation of bcl-2 facilitating beclin-1 entry to autophagic process. Furthermore, the deletion of Atg5 or beclin-1 gene enhanced apoptotic CHOP induction. Collectively, a-asarone-stimulated autophagy may be potential multi-targeted therapeutic avenues in treating ER stress-associated macrophage apoptosis.
INTRODUCTION
Autophagy is a catabolic process of selfdegradation of cytoplasmic constituents and organelles in the autophagolysosomes [1,2]. Many autophagyrelated genes (Atg) have been identified as components required for optimal autophagic functions [3,4].
Stepwise autophagic process starts with the engulfment of organelles or portions in the cytoplasm by formed isolation membrane phagophores, which subsequently sequesters cytoplasmic materials into a double-membrane autophagosomes [5]. The class III phosphatidyinositol-3 kinase (PI3K) complex with vacuolar protein sorting 34 (Vps34) mediates the nucleation of the phagophore membrane [3,4]. This nucleation is blocked by bcl-2 through binding to beclin-1, a component composing PI3K complex. Phagophore membrane elongation and autophagosome formation requires both Atg12-Atg5-Atg16L and microtubule-associated protein 1 light chain 3 (LC3)-phosphatidylethanolamine (PE) conjugates via two ubiquitin-like conjugation pathways, along with membrane-bound Atg9. Eventually, these autophagosomes fuse with lysosomes in aid of the LC3-phosphatidylethanolamine conjugate, ultimately leading to formation of the autolysosome. Upon vesicle completion, most of the Atg proteins are dissociated from the autophagosome, allowing autophagosome-lysosome fusion and autophagic cargo degradation [3,4,5].
Malfunction of autophagy has been implicated in a variety of diseases and pathologies, including cancer, neurodegeneration, aging and infectious diseases [2,5,6]. Autophagy is induced in response to stressors such as nutrient starvation, growth factor deprivation, organelle damage, and endoplasmic reticulum (ER) stress for the maintenance of cellular homeostasis and cell survival [2,7]. Accordingly, pathogenic alterations in the autophagic Research Paper: Pathology www.impactjournals.com/oncotarget machinery have emerged as key targets in the development of novel therapeutic strategies. Pharmacologic agents and molecules have been shown to be capable of influencing distinct steps of autophagic process and targeting the regulatory mechanisms of autophagy [8]. Accordingly, pharmacological approaches to influence autophagic pathways are currently receiving considerable attention for therapy in diseases [9].
Disruption in the normal function of the ER results in cell stress response known as the unfolded protein response (UPR) in response to the accumulation of unfolded proteins [10,11]. The UPR induces the synthesis of chaperones and protein components for the folding of ER client proteins, initially aiming at compensating for cell injury [12]. When the ER stress is extensive or sustained and the function of ER cannot be restored, the UPR can eventually prompt cell death [13]. There are accumulating data indicating that ER stress is a potent trigger of autophagy [14,15,16]. The signaling pathways governing autophagy and the cellular consequences in response to ER stress have been emphasized for the treatment of the numerous diseases related to ER stress [11,[15][16][17]. However, the physiological and pathological relevance of ER stress-induced autophagy remain puzzling. Emerging data show that ER stress-induced autophagy counterbalances the ER expansion, removes protein aggregates or mutant proteins from the ER [14,18]. For instance, enhanced autophagy in rapamycin-treated cells reduces the accumulation of the mutant proteins in the ER [19][20][21]. Accordingly, pharmacological agents enhancing autophagy may display therapeutic possibilities for their clinical exploitation, while autophagy inhibition is being suggested as a strategy for treating some cancers [9].
Natural polyphenolic compounds found in diet, such as genistein, quercetin, curcumin, and resveratrol, can trigger autophagy-associated cell death in cancer through influencing autophagic machinery at various stages [22]. Epigallocatechin-3-gallate (EGCG), a green tea polyphenol, stimulates hepatic autophagy and lipid clearance, possibly contributing to beneficial effects in reducing hepatosteatosis [23]. In addition, this compound reduces ectopic lipid accumulation in vascular endothelial cells through stimulating a mechanism involving autophagy [24]. The current study investigated that α-asarone stimulated autophagy mediated by ER stress in 7β-hydroxycholesterol-exposed macrophages. α-Asarone ( Figure 1A) is a component of certain essential oils found in herbal plants and has been isolated in purple perilla extracts [25]. This compound has been shown to be neuroprotective in mice and to reduce LDL cholesterol levels in rats [26,27]. Our previous study showed that α-asarone inhibited 7β-hydroxycholesterolinduced macrophage apoptosis through blocking ER stress-specific signaling involving caspase activation and C/EBP homologous protein (CHOP) induction [28]. This study examined the involvement of eukaryotic initiation factor 2α (elF2α)-CHOP-growth arrest and DNA damage-inducible protein 34 (GADD34) pathway in oxysterol-triggered beclin-1 activation leading to autophagolysosome formation.
Inhibition of eIF2α phosphorylation and GADD34 expression by α-asarone
ER stress promotes eIF2α activation leading to inhibition of protein synthesis, and induces GADD34 protein in cells experiencing environmental and metabolic stress [29]. The current study examined whether the oxysterol 7β-hydroxycholesterol induced the eIF2α phosphorylation and GADD34 expression in macrophages, which was inhibited by non-toxic α-asarone. As shown in Figure 1C, 28 μM 7β-hydroxycholesterol activated eIF2α and induced GADD34 expression in a temporal manner, evidenced by western blot analysis. The eIF2α phosphorylation was elevated from 4 h after the exposure of macrophages to 7β-hydroxycholesterol and reached considerably high at 18-24 h post-exposure ( Figure 1C). The protein GADD34 induction was shortly increased at 12-18 h after the challenge of 7β-hydroxycholesterol to macrophages. On the contrary, the eIF2α phosphorylation and GADD34 induction were inhibited by treating macrophages with α-asarone ( Figure 1D). Therefore, these results indicate that 7β-hydroxycholesterol induced ER stress leading to the elevation of eIF2α phosphorylation and subsequent GADD34 expression, which was attenuated by submicromolar α-asarone.
Autophagolysosome formation by α-asarone
Emerging evidence suggests that ER stress is a potent inducer of autophagy contributing to cell survival and the disturbance of autophagy rendered cells vulnerable to ER stress [14,15]. The current study investigated that 7β-hydroxycholesterol stimulated autophagosome maturation, as evidenced by staining with MDC for autophagosomal vacuoles. When 7β-hydroxycholesterolexposed macrophages were visualized with a fluorescence microscope, autophagic vacuoles such as autophagosomes stained by MDC appeared as distinct green dot-like structures distributed within the cytoplasm or localizing in the perinuclear regions ( Figure 2A). There was an increase in the number of MDC-labeled vesicles at 18 h after 20 μM α-asarone treatment, indicating an induction of autophagosome maturation by α-asarone. This study further examined whether α-asarone activated autophagosomal lysosomal fusion by double staining with MDC and the lysosome red stain lysotracker ( Figure 2B). There was lack of green MDC staining in untreated controls, whereas autophgophore vacuoles and lysosomes were colocalized in 7β-hydroxycholesterolexposed macrophages. When α-asarone was supplied to these cells, much stronger yellow staining with a punctate pattern was observed ( Figure 2B). These results indicate that α-asarone promoted autophagolysosome formation by 7β-hydroxycholesterol.
Promotion of autophagy initiation by α-asarone
Beclin-1 is a mammalian ortholog of yeast Atg6 and a core component of the autophagy machinery as a part of type III phosphatidylinositol 3 (PI3) kinase complex that is required for initiating the autophagic vacuole formation, so called as vesicle nucleation [30]. Western blot analysis revealed that the beclin-1 levels were gradually increased from 1 h after 7β-hydroxycholesterol treatment and sustained high up to 24 h ( Figure 3A). When ≥10 μM α-asarone was treated to 7β-hydroxycholesterolexposed macrophages, the beclin-1 induction was further enhanced, indicating that α-asarone accelerated autophagy process induced by oxysterol ( Figure 3B). In addition, the induction of Vsp34 and p150 proteins increased in a similar manner ( Figure 3B). Accordingly, α-asarone improved autophagic nucleation in oxysterol-stimulated macrophages.
There is growing evidence that at the ER beclin-1 interacts with anti-apoptotic bcl-2 family proteins, hence attenuating autophagy activity [31]. The disruption of beclin-1-Vps34 complexes can be achieved by bcl-2 phosphorylation by c-Jun N-terminal kinase 1 or beclin-1 phosphorylaion by death-associated protein kinase [31]. The bcl-2 activation was considerably enhanced from 6 h after the treatment with 7β-hydroxycholesterol ( Figure Figure 1: α-Asarone chemical structure A., macrophage cytotoxicity by α-asarone B., temporal responses of eIF2α expression and activation, and GADD34 induction to 7β-hydroxycholesterol C. and their inhibition by α-asarone D. J774A1 macrophages were incubated with 28 μM 7β-hydroxycholesterol up to 24 h. Macrophage viability (mean ± SEM, n = 5) was measured by using MTT assay and expressed as percent cell survival relative to glucose controls B. Cells were lysed, electrophoresed on 12% SDS-PAGE and subject to western blot analysis with a primary antibody against eIF2α, phospho-eIF2α or GADD34 C. Macrophages were incubated with 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for 18 h D. β-Actin protein was used as an internal control. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05. 4A). Unexpectedly, ≥10 μM α-asarone inhibited cellular phosphorylation of bcl-2 in 7β-hydroxycholesterolexposed macrophages ( Figure 4B). This study further examined whether α-asarone activated autophagy through blocking the binding of beclin-1 and bcl-2 in the ER of 7β-hydroxycholesterol-treated macrophages. 7β-Hydroxycholesterol declined the bcl-2 level in ER, while 20 μM α-asarone restored the induction, possibly increasing anti-apoptotic activity ( Figure 4C). The bcl-2 phosphorylation increased in the ER-enriched extracts in response to α-asarone in a similar manner to beclin-1 induction ( Figure 4C). These results indicate that this compound boosted the autophagic activity through relieving bcl-2-mediated repression of beclin-1-Vps34 complexes in 7β-hydroxycholesterol-exposed macrophages.
Elevation of LC3 lipidation and p62/sequestosome 1 (SQSTM1) induction by α-asarone
LC3 is localized in autophagosome membranes and is considered as a molecular marker of phagophores and autophagosomes. The cytoplasmic form of LC3I is converted into autophagosome membrane-bound lipidated form of LC3II that is correlated with the extent of autophagosome formation [32]. Using western blot analysis with a LC3 antibody, this study examined the LC3I and LC3II modification in macrophages after treatment with 7β-hydroxycholesterol for 24 h. The LC3II conversion was significantly elevated from 2 h after treatment with 7β-hydroxycholesterol, with a sustained effect occurred up to 18 h ( Figure 5A). When 1-20 μM α-asarone was treated to macrophages exposed to 7β-hydroxycholesterol for 18 h, a further increase in the levels of LC3II protein was apparently observed ( Figure 5B). p62/SQSTM1 binds autophagosomal membrane protein LC3, bringing SQSTM1-containing protein aggregates to the autophagosomes to be degraded [33]. The p62/SQSTM1 and neighbor of BRCA1 (NBR1) proteins promote autophagic degradation of ubiquitinated targets. This study examined the induction of p62/SQSTM1 and NBR1 in 7β-hydroxycholesterol-challenged macrophages. 7β-Hydroxycholesterol induced p62/SQSTM1 and NBR1 in a similar manner to LC3 induction ( Figure 5A). Such induction was significantly potentiated by α-asarone ( Figure 5B). Thus, α-asarone may enhance the specific its inhibition by α-asarone B., and beclin-1 induction and bcl-2 phosphorylation in ER C. J774A1 macrophages were incubated with 28 μM 7β-hydroxycholesterol up to 24 h A. Cells were exposed to 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for 18 h B. The ER-enriched extracts were obtained by using a commercial kit C. Cell lysates and ER extracts were electrophoresed on 8% SDS-PAGE, followed by western blot analysis with a primary antibody against phospho-bcl-2, beclin-1 or bcl-2. β-Actin and calnexin proteins were used as internal controls. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05.
Figure 3: Western blot analysis showing temporal responses of beclin-1 induction to 7β-hydroxycholesterol A. and
potentiation of becin-1, Vps34 and p150 by α-asarone B. J774A1 macrophages were incubated with 28 μM 7β-hydroxycholesterol up to 24 h. Cells were lysed and electrophoresed on 8% SDS-PAGE, followed by western blot analysis with a primary antibody against beclin-1, Vps34 or p150. Cells were exposed to 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for 18 h B. β-Actin protein was used as an internal control. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05. interaction among LC3, p62/SQSTM1 and NBR1 for the formation and the degradation of polyubiquitin-containing bodies by autophagy.
This study attempted to examine the association between autophagy and human LC3 protein during 7β-hydroxycholesterol-or α-asarone-induced autophagy in macrophages. RT-PCR analysis showed that the transcriptional variant of the LC3A gene LC3Av1, one of human LC3 gene family, was promptly activated at the transcriptional level in 7β-hydroxycholesterol-challenged macrophages ( Figure 6A). However, its activation was further elevated due to treatment of macrophages with 20 μM α-asarone, confirmed by RT-PCR and real-time PCR analyses ( Figure 6B and 6C).
α-Asarone potentiation of Atg protein induction by 7β-hydroxycholesterol
The process of autophagy can be divided into four steps such as induction, nucleation, vesicle expansion and closure, and autolysosome formation, which is regulated by the co-ordinated action of a number of Atg proteins [34]. The expansion of the developing autophagosomes is mediated by the Atg12-Atg5-Atg16L complex in an ubiquitin-like conjugation reaction [35]. Western blot data showed that 7β-hydroxycholesterol highly enhanced the protein levels of Atg5 and Atg16L in macrophages from 8 h up to 18 h after its treatment ( Figure 7A). In addition, the level of Atg12-Atg5 conjugate was elevated in a similar fashion to Atg5. The levels of Atg5, Atg12-Atg5 conjugate and Atg16L were further enhanced in α-asarone-treated macrophages ( Figure 7B). These results indicate that α-asarone augmented the expansion of phagophores by promoting the formation of Atg12-Atg5-Atg16L complex.
LC3 is cleaved by Atg4 and bound to a lipid moiety of phosphatidylethanolamine by Atg3 and Atg7 to produce LC3II in another ubiquitin-like reaction [32,35,36]. When ≥1 μM α-asarone was treated to macrophages, the induction of Atg3 and Atg7 by 7β-hydroxycholesterol was potentiated in a dose-dependent manner ( Figure 8A). Accordingly, α-asarone may facilitate the closure of the autophagosome through controlling LC3 lipidation.
The targeted deletion of Atg5 gene in macrophages using Atg5 siRNA increased the CHOP induction by 7β-hydroxycholesterol ( Figure 8B). In addition, the knockout of beclin-1 gene in 7β-hydroxycholesterolchallenged macrophages induced the CHOP expression ( Figure 8C). These results revealed that the inhibition of autophagy enhanced 7β-hydroxycholesterol-induced ER Cells were lysed, electrophoresed on 15% SDS-PAGE, and subject to western blot analysis with a primary antibody against bcl-2, LC3, p62/SQSTM1, or NBR1. Cells were exposed to 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for 18 h B. β-Actin protein was used as an internal control. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05. www.impactjournals.com/oncotarget 7β-hydroxycholesterol A. and its elevation by α-asarone B. and C. J774A1 cells were incubated with 28 μM 7β-hydroxycholesterol up to 8 h A., and in another set of experiments cells were incubated with 1-20 μM α-asarone and exposed to 28 μM 7β-hydroxycholesterol for 2 h B. and C. The LC3Av1 transcriptional levels were measured by RT-PCR and real-time PCR assays, and β-actin and GAPDH genes were used for the internal control. The bar graphs (mean ± SEM, n = 3) represent LC3Av1/GAPDH ratio. Values in bar graphs not sharing a letter indicate significant different at P < 0.05. www.impactjournals.com/oncotarget macrophages were incubated with 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for various times. Cells were lysed and subject to electrophoresis on 12% SDS-PAGE and western blot analysis with a primary antibody of Atg5, Atg12-Atg5, or Atg16L1. β-Actin was used for the internal control. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05.
Figure 8: Induction upregulaton of Atg3 and Atg7 by α-asarone A. and effect of Atg5 deletion B. and beclin-1 knockout C.
on CHOP induction. J774A1 macrophages were incubated with 28 μM 7β-hydroxycholesterol in the absence and presence of 1-20 μM α-asarone for various times. Cells were lysed and subject to electrophoresis on 12% SDS-PAGE and western blot analysis with a primary antibody of Atg3 or Atg7. β-Actin was used for the internal control. For the knockout of Atg5 gene or beclin-1 gene B., Atg5 siRNA or beclin-1 siRNA was introduced. The bar graphs (mean ± SEM, n = 3) in the bottom panels represent quantitative results obtained from a densitometer. Values in bar graphs not sharing a letter indicate significant different at P < 0.05. www.impactjournals.com/oncotarget stress leading to macrophage apoptosis.
DISCUSSION
Seven major findings were observed from this study. 1) The temporal response of eIF2α phosphorylation and GADD34 induction was dose-dependently downregulated by treating macrophages with 1-20 μM α-asarone. 2) The α-asarone treatment increased autophagic vacuoles with distinct dot-like punctate structures localizing in the perinuclear regions of 7β-hydroxycholesterol-exposed macrophages.
The UPR is mediated by cellular signals through three transmembrane sensors of protein kinase RNAlike ER kinase (PERK), inositol requiring kinase/ endonuclease-1α (IRE-1α) and activating transcription factor 6 (ATF6). These three canonical response pathways results in the inhibition of misfolded protein translation, and facilitates protein degradation of ER components for normal ER folding setting [37]. There is now ample evidence that ER stress and UPR are chronically activated in atherosclerotic macrophages and endothelial cells [38,39]. In particular, pro-atherogenic effect of prolonged ER stress is the activation of inflammatory pathways in macrophages [37]. Macrophages are vulnerable to lipidinduced toxicity in the setting of metabolic diseases, which leads to drive macrophages toward apoptosis [37]. One investigation shows that CD36-mediated oxidized LDL uptake triggers ER stress response in macrophages, enhancing the foam cell formation [40]. In our previous study, 7β-hydroxycholesterol resulted in ER stress-mediated macrophage apoptosis through pathways involving activation of the ER IRE1α and PERK [28]. In addition, α-asarone prevented ER stress induced-apoptosis by interfering the IRE1α downstream signaling and by disturbing PERK-ATF4 pathway in 7β-hydroxycholesterol-experienced J774A1 macrophages.
The PERK phosphorylates eIF2α and regulates ATF4 transcriptional activity to attenuate protein translation As depicted, α-asarone blocked apoptotic elF2α process and enhanced beclin-1-dependent autophagy responsible for autophagolysosome formation. The symbol ⊗ indicates sites of inhibition manifested by α-asarone, while arrows designate activation. as a defensive mechanism of UPR [41]. Accordingly, the PERK-phospho-eIF2α-ATF4 signaling inhibits the decline of protein synthesis during chronic ER stress by stimulating signaling downstream of the mammalian target of rapamycin complex 1 [42]. One study shows that the eIF2 phosphorylation is involved in polyglutamine 72 repeat aggregates-induced LC3 conversion [43]. The malfolded proteins induced ER stress-mediated cell death through PERK-eIF2α phosphorylation, which was inhibited by autophagy formation involving LC3 conversion and aggregate degradation. This study also showed that 7β-hydroxycholesterol induced both eIF2α phosphorylation and LC3 conversion in macrophages, indicating that this oxysterol generated malfolded proteins leading to ER stress. One can assume that when LC3 conversion and autophagy formation are not enough to diminish malfolded proteins, cells may undergo ER stress-mediated cell death with caspase-12 activation [43]. Congruently, 7β-hydroxycholesterol induced ER stress-mediated macrophage apoptosis with caspase-12 activation [28]. Additionally, α-asarone abrogated ER stress induced-cell death by decreasing the cleavage of caspase-12. Accordingly, α-asarone sufficiently enhanced the LC3 conversion for the autophagy formation enough to reduce formation of malfolded proteins triggered by 7β-hydroxycholesterol.
The autophagic alteration has been considered as a potential therapeutic target for diverse diseases, including neurodegenerative diseases, cancers and infectious diseases [6,9,44]. The implication of autophagy in human diseases has developed small-molecule modulators and pharmacologic agents with distinctive molecular targets in different human pathologies [8]. A variety of therapeutic agents target specific molecular components of the core autophagic machinery. Inhibiting autophagy is a promising approach in cancer therapy, based on evidence that autophagy is a survival-promoting mechanism in cancer cells, whereas the induction of autophagy is associated with the resistance of cancer cells to chemotherapeutic agents [45]. Accumulating evidence demonstrates that the induction of autophagy is a neuroprotective response in the context of neurodegenerative disorders such as Alzheimer disease, Huntington's disease and Parkinson's disease [44,46]. In addition, it is deemed that the activation of autophagy is cardioprotective, whereas excessive autophagy can lead to cell death and cardiac atrophy [47]. Accordingly, the alterations of the key proteins in the core autophagy machinery and upstream regulators represent an attractive therapeutic target for treating diverse diseases.
Recent studies imply that autophagy can be induced by dietary polyphenols such as resveratrol, catechin, oleuropein and curcumin [22,23,[48][49][50]. EGCG reduces intracellular lipid accumulation by stimulating LC3 conversion-associated autophagy through a Ca 2+ / calmodulin-dependent protein kinase kinase β-mediated mechanism [24]. Curcumin induces a beneficial form of autophagy in H 2 O 2 -exposed human vascular endothelial cells via a protective mechanism involving FOXO1, which may be a potential therapeutic avenue for the treatment of oxidative stress-related cardiovascular diseases [51]. Resveratrol induces autophagy in human dermal fibroblasts through regulating death-associated protein kinase 1, confirming the beneficial effects of resveratrol on autophagy in skin [52]. This stilbene suppresses autophagy-induced apoptosis in human U251 glioma cells [49]. Similarly, in the current study α-asarone increased autophagy induced by 7β-hydroxycholesterol in macrophages through negatively influencing eIF2-GADD34-CHOP-dependent mechanism, suggesting the therapeutic effects of α-asarone on the inhibition of macrophage cell death. In addition, α-asarone enhanced the bcl-2 phosphorylation in ER of oxysterol-treated macrophages, which appeared to hamper the binding of beclin-1 stimulating autophagic activity. In our previous study, α-asarone dampened 7β-hydroxycholesterolinduced macrophage apoptosis through blocking ER stress-specific signaling involving caspase-12 activation [27]. Taken together, α-asarone may be an antiatherosclerotic multi-targeted agent antagonizing eIF2α-GADD34-CHOP-mediated macrophage apoptosis and concurrently inducing pro-survival autophagy involving beclin-1 signaling pathway.
CONCLUSIONS
The current report demonstrated that α-asarone abrogated 7β-hydroxycholesterol-triggered eIF2α-CHOP activation, and enhanced macrophage autophagy through up-regulating autophagolysosome formation. α-Asarone boosted the beclin-1-Vps34-p150 induction for the phagophore elongation and the LC3 conversion for the membrane lipidation promoted by 7β-hydroxycholesterol, both required for the expansion of phagophores to form autophagosomes. Although α-asarone may serve as an effective in stimulating macrophage autophagy to degrade malfolded proteins in ER possibly generated by oxysterols, animal and clinical studies are required to investigate the in vivo effectiveness of α-asarone.
Cell culture
Mouse macrophage cell line J774A1 was obtained from American Type Culture Collection (ATCC; Manassas, VA) and grown in DMEM supplemented with 10% FBS at 37ºC in a humidified atmosphere of 5% CO 2 in air. Murine macrophages were treated with 1-20 μM α-asarone and exposed to 28 μM 7β-hydroxycholesterol for various times. In culture experiments, J774A1 macrophages were incubated in DMEM supplemented with 0.4% fatty acid-free bovine serum albumin (BSA).
The cytotoxicity of α-asarone was determined by using MTT (3-(4,5-dimethylthiazol-2-yl)-2,5diphenyltertrazolium bromide) assay. Cells treated with α-asarone for 24 h were incubated with 1 mg/ml MTT solution at 37°C for 3 h, resulting in the formation of insoluble purple formazan product that was dissolved in 250 µl isopropanol. Optical density was measured using a microplate reader at wavelength 570 nm. This study found that α-asarone at the doses of 1-20 μM had no cytotoxicity ( Figure 1B). The current experiments employed α-asarone in the range of 1-20 μM.
ER isolation
After macrophages were treated with 28 μM 7β-hydroxycholesterol and 20 μM α-asarone, the ER isolation was conducted with a commercial ER Enrichment kit (Novus Biological, Littleton, CO), according to the procedure suggested by the manufacture. Briefly, cells were lysed with hypotonic extraction buffer [10 mM HEPES (pH 7.8), 25 mM KCl, 1 mM EGTA] and centrifuged at 600 g for 5 min. Pellets were lysed with isotonic extraction buffer [10 mM HEPES (pH 7.8), 250 mM sucrose, 25 mM KCl, 1 mM EGTA], followed by homogenization. After centrifugation at 10,000 g for 30 min, the supernatants were rough ER fraction. The ER fractions were subject to western blot analysis for the detection of specific proteins.
Western blot analysis
Following culture protocols, J774A1 cells were extracted in a lysis buffer containing with 10 mg/ml β-glycerophosphate, 0.1 M Na 3 VO 4 , 0.5 M NaF and protease inhibitor cocktail. Equal amounts of protein were electrophoresed on 8-15% SDS-PAGE and transferred onto a nitrocellulose membrane. After blocking with 5% nonfat skim milk or 3% BSA for 3 h at room temperature, the membranes were incubated with polyclonal or monoclonal antibodies of eIF2α, phospho-eIF2α, GADD34, beclin-1, Vps34, p150, Atg5, Atg12-Atg5, Atg16L1, p62/SQSTM1, NBR1, LC3, Atg7, Atg3, and CHOP for overnight at 4ºC. After three times of washing with Tris buffered salinetween 20 buffer, the membranes were incubated with anti-rabbit or anti-mouse IgG conjugated to HRP for 1 h at room temperature. The individual protein level was detected by Immobilon Western Chemiluminescent HRP substrate (Millipore, Billerica, MA). For the internal control, the membranes were incubated with β-actin antibody (Sigma Aldrich Chemicals). After the performing immunoblot analysis, the blot bands were visualized on Agfa X-ray film (Agfa-Gevaert, Belgium), developing signals with X-ray developer and fixer (Duksan, Seoul, Korea).
RT-PCR and real-time PCR analyses
Total RNA was extracted from J774A1 macrophage lysed for 5 min using a commercial Trizol reagent kit (Molecular Probes Inc., Cincinnati, OH). The complementary DNA was synthesized using 5 μg of total RNA with 200 units of reverse transcriptase (Promega Corp. Madison, WI) and 0.5 mg/ml oligo-(dT) 15 primer (Bioneer, Daejeon, Korea). www.impactjournals.com/oncotarget
Atg5 small interfering RNA (siRNA) transfection
For the deletion of Atg5 gene, a major biomarker of autophagy, Atg5 siRNA transfection assay was conducted with J774A1 cells using a commercial lipofectamine 3000 mixture (Life Technologies). Cells were incubated with 5 μg Atg5 siRNA (Thermo Scientific, Waltham, MA) and lipofectamine 3000 mixture for 4 h at 37ºC. After the transfection, J774A1 cells were treated with 28 μM 7β-hydroxycholesterol for 18 h. Subsequently, cells were extracted in a lysis buffer and western blot analysis was performed with Atg5 antibody for the detection of the Atg5 deletion.
Statistical analysis
The results are presented as means ± SEM. Statistical analysis was conducted using the SAS software package version 6.12 (SAS Institute, Cary, NC). Oneway ANOVA was used to determine boosting effect of α-asarone on 7β-hydroxycholesterol induced-autophagy in macrophages. Differences among treatment groups were analyzed with Duncan's multiple range test and considered significant at P < 0.05. | 6,174 | 2017-01-09T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Smart-Watches Assisted Sugar Level Monitoring with Different Activities and Nutrition based on Machine Learning Approaches
ABSTRACT
Introduction
These days, diabetes is a complex disease in the human body, but in simple words, it is a lifestyle where body glucose can rise due to different activities (e.g., eating, sitting, and nutrition) [1].There are many ways to detect the sugar level in the body such as non-invasive glucose monitoring devices and smart watches which take different biomarkers from the human body.The bio-makers are blood samples, blood pressure, and CGM (Continuous Glucose Monitor) System Transmitter Sensor.Therefore, the role of the Internet of Things (IoT) has been gaining a lot of population to monitor the glucose level in the human body with the assistance of CGM [2].Nutrition has a lot of impact on body glucose when the human eats different diets that impact on body glucose [3].Therefore, the role of nutrition and technologies for body glucose is very critical to avoid any loss from this diabetes disease.Recently, the role of artificial intelligence and machine learning has emerged where these approaches assist and predict the glucose level in humans with different biomarkers [4].
These type-2 diabetes detection based on machine learning widely investigated in these studies [5][6][7][8][9][10].These studies investigated diabetes body glucose based on different biomarkers such as blood pressure, nutrition, obesity, glucose, retinopathy, and other factors.Machine learning approaches such as support vector machines, random forests, convolutional neural networks, LSTM, gradient boost, and decision trees are widely exploited to monitor and predict type 2 diabetes reasons in the human body.However, many research challenges exist in these methods and approaches to living well-being lives with diabetes and pre-diabetes in practice.
These are research questions; we are considering in this paper.(i) The existing diabetes detection and glucose monitoring methods did not consider the different human behaviors such as activities (e.g., sitting, moving, running, walking, and sleeping).Therefore, the new methods should consider these activities which are beneficial for humans with diabetes.(ii) The existing methods did not consider the direct relationship between nutrition and glucose levels in the body.Therefore, these aspects are widely ignored in existing research.
In this paper, we are suggesting an Internet of Things Assisted Sugar Level Monitoring Framework on Different Biomarkers based on Machine Learning Approaches.The objective is to monitor the glucose level of the body during different activities.Meanwhile, we predict the glucose level during eating of different nutrition in the body.With the objectives, the paper makes the following contributions.
(a).This study presents the IoT-assisted glucose monitoring framework that consists of different biomarkers and related parameters.
(b).We present the machine learning approaches to predict the classify the diabetes ratio in the human body.
(c).We present the dataset and simulation code at the end of the paper for further analysis and research.
The paper is organized in the following way.Section 2 is about related work.Section 3 is about methodology.Section 4 is about simulation results and discussion.Section 5 is about the conclusion and future work.
Related Work
Many studies suggested different diabetes and glucose monitoring frameworks in practice which are deployed and implemented in different laboratories.The glucose monitoring machines such as smart watches were introduced to collect real-time data from the human body during different time intervals.In study [1] non-invasive characterization of glycosuria and identification of biomarkers in diabetic urine using fluorescence spectroscopy and machine learning algorithm is presented for diabetes patients.These bio-makers such as blood pressure and glycosuria are identified at different research laboratories where diabetes prediction and classification are evaluated based on machine learning approaches.Study [2] presented a machine learning approach for the electrochemiluminescence-based point-of-care testing device to detect multiple biomarkers.Sensors and Actuators based on IoT are presented for diabetes patients.The biomarkers are blood pressure, age, obesity, and body mass index correlated and predict diabetes among patients.Impact of nutritional factors in blood glucose prediction in type 1 diabetes through machine learning and big data and machine learning to tackle diabetes management investigated in these studies [3,4].These studies focused on different nutrition based on their calories which directly affect the body glucose in human bodies.These studies predict and classify the glucose level at different time intervals.
Studies [5][6][7][8][9][10] focused on type-2 diabetes with different bio-makers and nutrition with different intakes in body behaviours.The Prediction of type 2 diabetes mellitus using hematological factors based on machine learning approaches: a cohort study analysis.These studies exploited different machine learning approaches such as random forest, decision tree, k-means, and convolutional neural networks are widely implemented to predict diabetes that have different cooperated dependent variables.The random datasets considered consisted of different biomarker data values with different age groups of people.
Studies [11][12][13][14][15][16][17][18][19][20] suggested CGM IoT-enabled smart watches assisted glucose monitoring frameworks.The data is offloaded and analyzed at different hospital servers for processing.These works supported glucose monitoring in the ubiquitous environment, where patients can monitor their glucose health during eating and work at their offices and homes.The IoT CGM is a non-invasive device that is available in terms of smart-watches and all users can buy and use them during their activities.Ongoing Glucose Monitoring (OGM) is a technological tool for overseeing diabetes through the continuous observation of real-time blood glucose levels day and night.Provided here is a summary of essential elements linked to ongoing Glucose Monitoring.These studies integrated OGM for diabetes patients to provide continuous and detailed insights into their blood glucose levels.It assists in making informed decisions regarding insulin doses, dietary selections, and lifestyle.Systems for ongoing glucose monitoring consist of a small sensor placed under the skin, typically on the abdomen.The sensor measures glucose levels in the interstitial fluid (the fluid surrounding the cells) and transmits the data to a monitor or smartphone.Continuous glucose monitoring provides immediate data on glucose levels, typically displaying the information every few minutes.This enables users to observe trends, recognize patterns, and promptly respond to elevated or diminished glucose levels.OGM devices often come with flexible notifications and alarms to notify users when their glucose levels are excessively high or low.This feature is particularly useful for preventing severe hypoglycemia or hyperglycemia.Some OGM systems can be integrated with insulin pumps, forming a closed-loop system.This allows automated adjustments to insulin distribution based on real-time glucose data, providing a more meticulous and dynamic approach to diabetes oversight.Users can analyze OGM data to understand their glucose trends over time.This data is invaluable for healthcare practitioners to make adjustments to treatment plans during routine examinations.OGM has demonstrated effectiveness in contributing to improved glycemic control and reducing HbA1c levels in individuals with diabetes.It provides a more comprehensive portrayal of glucose fluctuations compared to traditional self-monitoring methods.Continuous glucose monitoring reduces the need for frequent fingerstick tests, presenting a more convenient and less intrusive technique for glucose oversight.Despite its advantages, OGM technology may encounter obstacles such as cost, precision of devices, and user adherence.Additionally, users must periodically calibrate certain OGM systems with conventional blood glucose measurements.The technology for ongoing glucose monitoring continues to advance.Progress includes improved sensor precision, extended wear durations, smaller and more comfortable devices, and integration with emerging technologies like artificial intelligence for predictive analytics.OGM has significantly improved diabetes management by offering a more comprehensive understanding of glucose dynamics, enhancing treatment decisions, and ultimately improving the quality of life for individuals with diabetes.
To the best of our knowledge, the Internet of Things Assisted Sugar Level Monitoring Framework with different activities on Different Biomarkers based on Machine Learning Approaches has not been studied yet.We are solving the incremental glucose monitoring problem with many biomarkers, activities, and approaches based on different diversity of generated data from different patients.Therefore, we are very different from existing studies and try to solve the aforementioned research questions in our work.
Proposed Sugar Glucose Monitoring Algorithm Framework
In this paper, we present the activity and nutrition-assisted sugar glucose level monitoring algorithm framework that consists of different components as shown in Figure 1.The algorithm components consisted of smartwatches glucose monitoring data, and nutrition data while subjects performed their activities in daily life practice.We consider that each subject exploits CGM during eating, running, walking, and different activities and collect their data at the runtime.The data has only a numeric form based on set threshold values and is stored in the dataset with extension CSV.The nutrition can be anything with different calories eaten by the subject, then we observe the sugar level with the assistance of CGM smartwatches.We present the ASA algorithm framework that consists of different sub-schemes as shown in Figure 1.
We denoted the ASA algorithm schemes in algorithmic as shown in Algorithm 1.It works like a flowchart and defines the hierarchy of data processing from input to decision.Algorithm 1 takes the input as a dataset CSV file which was collected from different CGM IoT devices and nutrition data.We exploited this dataset as real-time data which consisted of different features.The features are defined in the following way: Subject, Age, sugar glucose, Activity, Body mass index (BMI), diabetes, and blood pressure.We pre-process the data when it is divided into a features matrix and remove all null and unnecessary values from the data.We conducted this process on the machine which is high processing computing machine.We consider the constraints such as accuracy, F1-score, precision, and recall values for the prediction and classification of sugar glucose in different subjects.We set these hyper-parameters on different algorithms such as the K-Nearest Neighbours Algorithm (KNN), Support Vector Classifier (SVC), Decision tree (DT), Gaussian Naive Bayes (GNB), Random Forest (RF), and Gradient Boost (GB) are implemented to predict and classify the data based on given hyperparameters.We implemented these algorithms along with ASA to predict the diabetes and sugar glucose level in different subjects during their activities in practice.We executed all data features based on their extracted features and made decisions based on their probability values.The prediction and result are the last phase of the algorithm as shown in Algorithm We show the implementation of the algorithm in the experimental part, where different visual analytics and statistical methods are applied to show the results from different perspectives.We determined the prediction and result on the following metrics.4) shows the accuracy of the generated results which is consisted of TP, FP False negative (FN), and True negative (TN).We measured these metrics in the performance evaluation and discussed them in the result analysis.We monitor the sugar glucose monitoring of different subjects with the different parameters such as age, glucose, walking, sitting, sleeping, running, blood pressure, nutrition, BMI and calories consumption as shown in Figure 2. We analyzed and monitored this kind of process with different subjects when they were wearing smart which generated the values of sugar glucose level, and blood pressure while a subjects eating, running, walking, and sitting in their daily lives.Therefore, our framework shows the prototype where we can monitor the glucose relationship with the different variables as shown in Figure 2.
Based on collected data from smart watches (CGM), the values are different diabetes and nondiabetes subjects.Therefore, the impact of nutrition, activities on glucose levels could be changed.Therefore, we predict the result based on the subject health such as whether either subject is diabetic or non-diabetic as shown in the confusion matric We show the glucose level results based on different cell results that were collected with the smartwatches such as glucose level, healthy blood cells, and blood circulation with the different activities.In Figure 4, the first aspect shows that a subject is walking and eating nutrition and manageable sugar level in the body, where blue dots are less and lower as compared to healthy cells and body circulation.In the second aspect, a subject is eating and sitting, we can observe the ratio of sugar and glucose level is increased at the upper level when a subject is not doing any activity.When a subject runs in different time intervals, the body's sugar and glucose level always remain stable as compared to not doing any activity as shown in aspects 3 and aspects 4 in Figure 4. Figure 5 shows that all algorithms can reach different predictions, when the subjects performed the different activities and eating with activity and eating without activity in different time intervals.Almost, all algorithms obtained good results, ASA obtained a better result that is 0.77 as compared to all algorithms.However, it is our initial prototype and empirical work, there still accuracy needs to be improved more in future works.
Conclusions
We presented the modified dataset with an additional feature such as sugar glucose level with different activities (e.g., running, sitting, sleeping, and walking) while eating different nutrition at various time intervals.We presented empirical machine learning, such as an activity glucose monitoring algorithm (ASA), which executed all datasets with more optimal results.Simulation results showed that our proposed framework was more optimal and displayed glucose monitoring with different activities and more features compared to existing smartwatches.We obtained an accuracy of 78% compared to existing machine-learning methods.In the result analysis, we showed the different aspects of activities that are useful for the subjects to monitor their glucose level through different smartwatch features at work.
In future work, we will improve the accuracy of the ASA algorithm and consider many other factors such as glucose controlling and monitoring in a parallel way during time intervals in the optimal way.Author Contributions S.M. designed this paper, writing, methodology, software, data analysis, and experiments.
Funding
For this manuscript, I am working individually, therefore, I have no funding for this paper.
Data Availability Statement
In this paper, we put the data and code on the public repository that is GitHub and available at the following link: https://github.com/Sajida-memon/Activity-and-Nutrition-Glucose-Monitoring/tree/main.The code and dataset consisted of a source where we exploited the dataset and code and improved it according to our considered problem as shown in Figure 1.
Fig. 1 .
Fig. 1.Smart-Watches Assisted Sugar Level Monitoring with Different Activities and Nutrition based on Machine Learning Approaches shows the precision that is true positive (TP) and false positive (FP) on the generated results.shows the recall that is true positive (TP) and false negative (FN) on the generated results.shows the F1-score that is dependent upon the recall and precision of the generated results.
Figure 3 .
The values are determined based on equations (1-4) with the true positive and false positive and true negative and false negative values as shown in Figure3label.
Fig. 4 .
Fig. 4. Glucose Level with Activities and Nutrition for Patients.
Table 2
Result from Discussion with Different Metrics | 3,322.2 | 2024-04-21T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
The DEAD-Box Protein DP103 (Ddx20 or Gemin-3) Represses Orphan Nuclear Receptor Activity via SUMO Modification
ABSTRACT Structural analysis of nuclear receptor subfamily V orphan nuclear receptors suggests that ligand-independent mechanisms must regulate this subclass of receptors. Here, we report that steroidogenic factor 1 (SF-1) and liver receptor homolog 1 are repressed via posttranslational SUMO modification at conserved lysines within the hinge domain. Indeed, mutating these lysines or adding the SUMO isopeptidase SENP1 dramatically increased both native and Gal4-chimera receptor activities. The mechanism by which SUMO conjugation attenuates SF-1 activity was found to be largely histone deacetylase independent and was unaffected by the AF2 corepressor Dax1. Instead, our data suggest that SUMO-mediated repression involves direct interaction of the DEAD-box protein DP103 with sumoylated SF-1. Of potential E3-SUMO ligase candidates, PIASy and PIASxα strongly promoted SF-1 sumoylation, and addition of DP103 enhanced both PIAS-dependent receptor sumoylation and SF-1 relocalization to discrete nuclear bodies. Taken together, we propose that DEAD-box RNA helicases are directly coupled to transcriptional repression by protein sumoylation.
Steroidogenic factor 1 (SF-1) and liver receptor homolog 1 (LRH-1) are two closely related transcription factors belonging to the nuclear receptor subfamily V (NR5A) that contain a highly conserved DNA binding domain (DBD), a large hinge domain and a ligand binding domain (LBD) (Fig. 1A). Drosophila melanogaster Ftz-F1 is the founding member of this subfamily and interacts directly with the pair-rule gene product of Ftz to control parasegmentation at early embryonic stages (25). The mammalian orthologs SF-1 and LRH-1 are also critical in tissue development and organogenesis (19,27,33). During development, SF-1 is essential for male differentiation, adrenogonadal morphogenesis, and terminal differentiation of the ventromedial hypothalamus, and in the adult, this receptor regulates genes involved in steroid biosynthesis and endocrine signaling (34,44). Although SF-1 null mice die at birth from adrenal failure, SF-1 heterozygous mice live. However, further analyses of these heterozygous mice show that despite seemingly adequate levels of SF-1, the amount of active SF-1 protein is insufficient to overcome defects in adrenal morphogenesis (2,3). In humans, SF-1 haploinsufficiency is associated with severe adrenal disease and gonadal dysgenesis (1,28). LRH-1 acts far earlier in development than SF-1, as evidenced by the embryonic lethality observed in LRH-1 null embryos (33). In vitro and in vivo analyses have implicated LRH-1 in bile acid homeostasis (13,26), where a heterozygous phenotype has also emerged in the intestine (4). In addition, LRH-1 controls tissue conversion of androgens to estrogen by regulating aromatase gene expression (7,17) Despite the fact that the high-resolution crystal structure of LRH-1 revealed a large hydrophobic pocket within the LBD (38), natural ligands have yet to emerge for this subclass of receptors. As such, the question of how subfamily V receptors are regulated is unclear. In many cellular contexts, this subclass of receptors is active and presumably recruits coactivators in a ligand-independent manner. NR5A receptor activity depends on two distinct regions in the LBD, an activation function in helix 1 and the C-terminal AF2 domain (8,20). In both SF-1 and LRH-1, a repression domain has been identified in the hinge region (32,47). For SF-1, this domain is reported to interact with the DEAD-box RNA helicase DP103 (Ddx20 or Gemin-3) (49), although the precise mechanism of SF-1 repression by DP103 is unknown.
Phosphorylation and sumoylation are posttranslational modifications known to modulate nuclear receptors. Phosphorylation of SF-1 is proposed to increase receptor activity by stabilization of the LBD and enhanced cofactor recruitment (8,11,15). On the other hand, sumoylation of transcription factors, such as Elk-1, Lef1, and nearly all steroid nuclear receptors, results in their transcriptional repression (5,18,35,39,42,50). Sumoylation occurs at canonical motifs of KXE, where is a hydrophobic amino acid and K is the acceptor lysine for covalent attachment of the small ubiquitin-like modifier (SUMO). SF-1, LRH-1, and other invertebrate NR5 receptors are predicted to be sumoylated given the presence of a conserved IKSE or I/VKQE site in the hinge region (Fig. 1A). SUMO modification of proteins is analogous to ubiquitination, involving a three-step ATPdependent reaction. Processed SUMO protein is loaded onto the heterodimeric E1 enzyme (SAE1/SAE2) and transferred from E1 to the sole E2 enzyme Ubc9, which then mediates SUMO conjugation to the protein substrate with aid from E3-SUMO ligases. Protein inhibitor of activated stats (PIAS) proteins comprise the largest of three identified E3-SUMO ligase classes (29). This protein conjugation is dynamic and easily reversed by Sentrin/SUMO-specific proteases (SENP/SUSP), which cleave SUMO from its substrate. However unlike ubiquitin conjugation, which primarily facilitates protein degradation, SUMO modification of transcription factors often results in transcriptional repression. Others have proposed that this repression involves direct recruitment of histone deacetylases (HDACs) (40,51) or a relocalization of the SUMO-marked protein to promyelocytic leukemia protein (PML) nuclear bodies (9,39).
Here we identify sumoylation as an important posttranslational regulatory mechanism for dampening the activity of subfamily V nuclear receptors. Potential mechanisms for sumoylation-mediated repression were investigated and found to involve a functional interaction between the receptor and the DEAD-box RNA helicase DP103. for Drosophila (dm) Ftz-F1 and mouse SF-1 and LRH-1 are shown, with SUMO sites (S) and phosphorylation sites (P) indicated. The repression domain is also shown (R, black square). (B) An anti-HA Western blot of COS-7 lysates is shown after transfection with HA-epitope tagged SF-1 or LRH-1 and SUMO1 or GFP-SUMO1. The slower-migrating forms of each receptor are indicated (arrowheads), and all lysates were prepared in the presence of NEM, an inhibitor of SUMO isopeptidases. (C) Western blots are shown for Y1 whole-cell lysates treated with (ϩ) or without (Ϫ) 20 mM NEM. Protein was detected with an anti-SF-1 antibody. Upshifted SF-1 after NEM treatment is indicated by an arrowhead. (D) An anti-HA Western blot of COS-7 cells is shown for empty vector control (pCI), the HA-SF-1 wild type, and lysine mutants with sumoylated SF-1 (arrowhead) and nonsumoylated SF-1 (SF-1) indicated; SUMO1 was coexpressed in all conditions. A control immunoblot for SUMO1 is shown below. (E) An anti-SUMO1 Western blot of HA-immunoprecipitated lysates from COS-7 cells transfected with the wild type or lysine mutants of SF-1 is shown, with sumoylated SF-1 (arrowhead) and nonspecific bands (NS) indicated. A control immunoblot for HA-SF-1 expression is shown below. One microgram of each plasmid was added for all transfections. (F) In vitro sumoylation of the in vitro-transcribed and -translated 35 S-labeled wild-type and lysine mutants of SF-1 (1 l) was carried out as described in Materials and Methods. Unmodified SF-1 (SF-1) and sumoylated SF-1 (arrowheads) are indicated. ␣, anti; IP, immunoprecipitation; WB, Western blotting. Recombinant protein expression, in vitro sumoylation assay, and GST pull downs. Recombinant His 6 -hSUMO1 (aa 1 to 97) was expressed and purified by TALON chromatography (Clontech). Recombinant His 6 -hE1 (SAE1/SAE2) and His 6 -hUbc9 were obtained commercially (LAE Biotech). In vitro-transcribed and -translated 35 S-SF-1 and variants thereof were produced (Promega) and incubated with 150 ng of E1, 750 ng of His 6 -Ubc9, and 900 ng of His 6 -SUMO1 in 50 mM Tris (pH 7.6), 5 mM MgCl 2, 1 mM dithiothreitol, and 2.5 mM ATP at 37°C for 1.5 h, and the reaction was stopped by boiling in protein loading buffer. Samples were subjected to SDS-PAGE followed by autoradiography. Glutathione S-transferase (GST) pull-down assays were carried out with 35 S-SF-1 or variants thereof and purified GST-C-terminal hDP103 as described previously (15,21).
Chromatin immunoprecipitation assay. HeLa luciferase reporter (Stratagene) cells containing an integrated promoter-reporter of five Gal4 binding sites fused to the luciferase gene were electroporated with pCI-Neo and HA-tagged pGal-SF-1 constructs (4 g). The method used follows that described in reference 46, with PCR conditions of 25 cycles at 95°C for 30 s, 53°C for 1 min, and 72°C for 1 min and by using primers described previously (40) to amplify a 5Ј 330-bp region of luciferase cDNA.
Subfamily V receptors are sumoylated in the hinge region.
Although sumoylation is known to repress steroid receptor activity, this modification has not been investigated for socalled orphan nuclear receptors, which can function in a ligand-independent manner. In a modified one-hybrid yeast screen for SF-1 protein partners, we identified Ubc9 or the E2 SUMO conjugating enzyme as a strong interacting protein (data not shown). We next sought to determine whether SF-1 and LRH-1 could be sumoylated. Indeed, sequence analysis of all vertebrate species of SF-1 and LRH-1 revealed two highly conserved canonical sumoylation motifs at the N-and C-terminal hinge regions, while insect Ftz-F1 variants contained one site in the N-terminal hinge region (Fig. 1A).
Sumoylation of both SF-1 and LRH-1 was demonstrated in a cellular system, as evidenced by slower-migrating bands after coexpression of receptor with either SUMO1 or GFP-SUMO1 (Fig. 1B). In addition, a similar slower-migrating SF-1 species was detected in NEM-treated lysates made from both Y1 and ␣T3 cells ( Fig. 1C and data not shown), suggesting that endogenous SF-1 is sumoylated. Further analysis revealed that Lys194 served as the major acceptor lysine for SF-1 sumoylation, as evidenced by the loss of the slower-migrating band with the single mutation K194R and double mutation (K119R and K194R, referred to as 2KR) but not with K119R (Fig. 1D) results for SF-1 are similar to those of other recent reports (6,22). The identity of these slower-migrating SF-1 species as sumoylated receptors was confirmed by immunoprecipitation of HA epitope-tagged SF-1, followed by Western blotting with an anti-SUMO1 antibody (Fig. 1E), and as predicted, no sumoylated species were observed with K194R or 2KR mutant proteins. These results were confirmed in an in vitro sumoylation assay, with Lys194 identified as a major site and Lys119 presumed to be a minor sumoylation site (Fig. 1F). Amounts of sumoylated SF-1 diminish in both the K194R and 2KR mutants; the faint residual upshifted band observed in the 2KR variant imply that a minor third site can be sumoylated in vitro. Taken together, we conclude that subfamily V receptors are sumoylated in vivo and in vitro.
Sumoylation of SF-1 attenuates transcriptional activity.
Previous studies identified a regulatory domain which when mutated led to increased receptor activity; this domain contained the major sumoylation site for SF-1 and LRH-1 ( Fig. 1A) (32,47). Consistent with these reports, we found increased activity of NR5A promoter reporters with either SF-1 or LRH-1 sumoylation mutants ( Fig. 2A). Increased receptor activity observed with both the K194R and 2KR receptor mutants was not due to increased protein stability, as judged by results from pulse-chase metabolic labeling experiments (Fig. 2B). Gal4-SF-1/LRH-1 fusion receptors containing the full hinge and LBD also showed a dramatic increase in activity following mutation of the sumoylation acceptor sites. Strikingly, the single mutant K194R was at least 70-fold more active than the wild type, and mutation of both sumoylation sites (2KR) resulted in greater than 300-fold activation (Fig. 2C, left panel). While K119R exhibited comparable activation to that of the wild type, the double mutant at both Lys119 and Lys194 showed remarkable synergism; this is consistent with Lys119 as a minor site. Similar to native receptors, Gal4-SF-1 and Gal4-K119R are efficiently sumoylated, whereas Gal4-K194R and Gal4-2KR exhibit no detectable sumoylation (Fig. 2C, left lower panel). Nearly identical results were observed for Gal4-LRH-1 constructs, where double mutation of K213R and K289R in the hinge region led to strong receptor activation (Fig. 2C, right panel).
To confirm that receptor sumoylation served to repress SF-1 activity, we asked whether removing the SUMO conjugate from SF-1 with the SUMO isopeptidase SENP1 would yield similar results, as observed with the SF-1 lysine mutants. Indeed, coexpression of SENP1 with SF-1 and SUMO1 resulted in a marked attenuation of sumoylated SF-1 (Fig. 3A). Furthermore, activities of both the wild type and the K119R mutant were enhanced after the addition of small amounts of SENP1 expression vector (25 ng), reaching levels observed with the K194R mutant (Fig. 3B, left panel). Addition of SENP1 failed to activate the 2KR variant, providing further evidence that Lys119 and Lys194 are the sites of sumoylation (Fig. 3B, right panel). Collectively, our data suggest that Lys194 plays a dominant role in mediating repression of SF-1 via sumoylation and that receptor sumoylation represents a major silencing mechanism.
A DEAD-box protein mediates repression via SF-1 sumoylation. The mechanisms by which protein sumoylation leads to transcriptional repression are diverse. Recent literature suggests that repression by sumoylation involves (i) nuclear relo-calization with a concomitant decrease of promoter occupancy or (ii) direct recruitment of HDACs. Therefore, we asked whether sumoylation mutants differ in their subnuclear localization. Both GFP-wild type and GFP-SUMO mutants yielded nearly identical patterns of nuclear localization (Fig. 4A). Consistent with these results, no apparent differences were noted in the promoter occupancy of Gal4-wild type compared to the K194R mutant as judged by chromatin immunoprecipitation (ChIP) results with a HeLa cell line containing a stably integrated Gal4 reporter (Fig. 4B). We next asked whether SF-1 sumoylation promotes recruitment of HDACs by using the class I and II HDAC inhibitors, trichostatin A (TSA) and sodium butyrate (NaBT). If HDAC recruitment is essential for SUMO-mediated repression, mutating the major sumoylation sites within SF-1 should prevent derepression by TSA or NaBT. Instead, addition of TSA or NaBT led to a dramatic increase in the activity of all receptor variants ( Fig. 4C and D). Our results differ from those recently shown for Elk-1, where loss of sumoylation eliminates TSA sensitivity (51), and thus, we suggest that repression of SF-1 via sumoylation is largely HDAC independent.
For subfamily V, two types of repressors have been identified. The first includes the orphan nuclear receptors Dax1 and SHP, which interfere with the AF2 in the LBD. The second is the RNA helicase DEAD-box protein DP103 (32). Indeed, while Dax1 was able to repress the Gal4-K194R mutant as effectively as Gal4-wild type (Fig. 5A, left panel), DP103 was ineffective at repressing the Gal4-K194R and 2KR mutants (Fig. 5A, right panel, and data not shown). Moreover, addition of SENP1 failed to abolish Dax1-mediated repression of SF-1 (Fig. 5B, left panel). In contrast, addition of SENP1 completely eliminated DP103-mediated repression of Gal4-SF-1 (Fig. 5B, right panel). Our work contrasts a recent report showing no difference between DP103-mediated repression in wild-type and K194R (22). This discrepancy may reflect a difference in cell types or the significantly greater amounts of DP103 used compared to experiments shown here. Nonetheless, our data agree with those reported by Ou and colleagues showing Lys194 to be essential for DP103 repression of SF-1 (32).
To test the hypothesis that sumoylation at Lys194 allows DP103 to function as a repressor, interaction between DP103 and sumoylated SF-1 was explored by direct binding assays. As shown previously, only the C-terminal half of DP103 interacts with SF-1 (32). Mutation of Lys194 and/or Lys119 did not result in an appreciable loss of binding, suggesting that Lys194 is not the sole determinant for DP103 interaction with SF-1 (Fig. 5C). Furthermore, DP103 is able to interact efficiently with in vitro sumoylated forms of SF-1 (Fig. 5D). These results provide evidence that the DEAD-box protein DP103 interacts with sumoylated SF-1 and directly participates in receptor repression.
DP103 promotes PIAS-dependent sumoylation and subnuclear relocalization of SF-1. To further explore how DP103 may affect SF-1 activity, we first defined the optimal E3-SUMO ligase in vivo. One of the defining characteristics of an E3-SUMO ligase is its ability to interact with and promote sumoylation of a given substrate. In both the yeast and mammalian two-hybrid assays, SF-1 interacted strongly with PIAS1 and less well with PIASx␣ and PIASy ( Fig. 6A and B). However, despite this strong interaction, PIAS1 does not serve as an effi- cient E3-SUMO ligase for SF-1 in vivo. In a survey of four PIAS members, only PIASx␣ and PIASy promoted SF-1 sumoylation in a dose-dependent manner; this effect was not observed for PIAS1 or PIAS3 (Fig. 6C, left panel). In contrast to results from the in vitro assay, overexpression of PIAS proteins in vivo does not reveal detectable sumoylation at noncanonical sites, as evidenced by the 2KR mutant (Fig. 6C, right panel, and data not shown). Interestingly, mutating the major phosphorylation site of SF-1 adjacent to Lys194 (S203A) had no effect on receptor sumoylation (Fig. 6C, right panel). Next, the functional effects of overexpressing PIAS proteins on wild-type and 2KR receptors were determined. Consistent with Other promoter-luciferase reporters used in HepG2 cells were the 3-hydroxysteroid dehydrogenase promoter (3HSD Luc, Ϫ153/ϩ2 bp), a synthetic promoter containing tandem SF-1 response elements from the mouse Müllerian inhibiting substance promoter (2XRE MIS), and the StAR promoter (StAR Luc, Ϫ966/ϩ1); 250 ng of each promoter was used. (B) The stability of wild-type (WT) and lysine mutant (K194R or 2KR) SF-1 proteins in COS-7 cells was determined after metabolic labeling, followed by a chase for 0, 2, 5, and 12 h. An autoradiogram of immunoprecipitated HA proteins from whole-cell lysates is shown, with phosphorimaging data graphed as the percentage of labeled protein remaining after each chase period; levels of protein at time zero were taken to be 100%. (C) Transcriptional activity is shown for the Gal4-SF-1 wild type (pGalWT, aa 105 to 462, 25 ng) or Gal4-SF-1 lysine mutants (pGalK119R, pGalK194R, and pGal2KR; 25 ng) on the Gal4-luciferase reporter (pFR-Luc, 200 ng; Stratagene) in COS-7 cells (left panel). Anti-HA Western blotting shows expression levels of the Gal4-SF-1 WT or KR mutants, with slower-migrating forms of sumoylated Gal4-SF-1 protein indicated (arrowhead). Transcriptional activities of the Gal4-LRH-1 wild type (pGalWT, aa 198 to 560, 25 ng) and lysine mutants (pGalK213R, pGalK289R, and pGal2KR) are shown (right panel). All luciferase activity is expressed as activation over parent vectors: pCI-neo (C) for panels in A and pM (pGal) for panels in C. Hrs, hours; WB, Western blotting.
VOL. 25,2005 SUMO REPRESSION OF NR5A RECEPTORS BY DEAD-BOX PROTEIN PIAS-dependent activation of other nuclear receptors (24), we observed an initial activation phase, followed by repression when PIASx␣ is added to the wild-type receptor (Fig. 6D). Addition of SUMO1 further enhanced receptor repression, suggesting that increased sumoylation does silence SF-1 activity. In contrast, increased repression was not observed with the double mutant 2KR (Fig. 6E). The global repression observed with increasing amounts of SUMO1 added to either wild-type or mutant receptors most likely reflects the multiple nuclear substrates affected by the sumoylation machinery, including corepressors and coactivators (23).
To determine how sumoylation affects interaction between DP103 and SF-1, the levels of receptor sumoylation were driven by the optimal E3-SUMO ligase PIASy. DP103 interacted with SF-1 in the presence of PIASy but not under basal levels of sumoylation or after addition of SENP1 (Fig. 7A). Surprisingly, DP103 enhanced PIAS-mediated sumoylation (two-to threefold) for all PIAS proteins, except PIAS3 (Fig. 7B). No significant increase in sumoylation was observed with DP103 alone (control). Whether this effect arises from increased ligase activity of PIAS proteins or by protecting sumoylated SF-1 from desumoylation remains to be determined. Finally, we asked whether DP103 would alter the subnuclear localization of SF-1. Although our previous results suggested that the nuclear pattern of SF-1 does not change under basal levels of sumoylation, a dramatic relocalization of GFP-SF-1 was revealed when DP103 was coexpressed with PIASy and SUMO1; two representative cells with prominent nuclear bodies are shown (Fig. 7C). Addition of SUMO1, PIASy, or DP103 alone or a combination of DP103 plus PIAS1, PIASx␣, or PIAS3 showed no SF-1 relocalization (Fig. 7C; data not shown). However, we noted the presence of fine GFP-SF-1 foci in some cells with PIASy alone (Fig. 7C). The ability of DP103 and PIASy to shuttle SF-1 to discrete nuclear bodies does not apparently require SF-1 sumoylation, as evidenced by a speckled pattern after the addition of SENP1 or with the K119R, K194R, and 2KR GFP-SF-1 mutants (Fig. 7C and data not shown). Further analysis revealed colocalization of GFP-SF-1 with PIASy but not with DP103, which localizes to Cajal bodies or gems (Fig. 7D). These GFP-SF-1 nuclear bodies appear distinct from endogenous splicing speckles, as shown by the nonoverlapping patterns between GFP-SF-1 and splicing factor 2 (SF2)ASF. Moreover, these foci do not resemble PML nuclear bodies (PML-NBs), given that we failed to detect obvious PML-NBs in COS-7 cells under our culture conditions with two markers, Sp100 and PML ( Fig. 7D and data not shown). Collectively, our data suggest that DP103 promotes PIAS-mediated sumoylation and, together with PI-ASy, relocalizes SF-1 to discrete nuclear foci. Whether these foci are functionally significant remains to be determined; however, their formation correlates well with optimal receptor sumoylation, suggesting a functional complex between SF-1, PIASy, and DP103.
DISCUSSION
In this study, we report that subfamily V nuclear receptors are sumoylated at evolutionarily conserved sites. As established for other transcription factors, SUMO modification of SF-1 and LRH-1 significantly attenuates transcriptional activity. Mutating the acceptor lysines in both SF-1 and LRH-1 resulted in a more active receptor, and at least in the Gal4 context, the relative increase is reminiscent of ligand-dependent receptor activation. Thus, for subfamily V receptors, the extent of sumoylation represents one mechanism to both regulate and restrain receptor activity. Our data also suggest that sumoylation of the so-called repression domain in SF-1/LRH-1 marks the receptor for repression by the DEAD-box protein DP103. Moreover, this ATPase/RNA helicase was found to enhance PIAS-dependent receptor sumoylation and to promote PIASy-dependent shuttling of SF-1 to discrete nuclear bodies or foci. Subnuclear relocalization of SF-1 correlated strongly with conditions that promote extensive receptor sumoylation, suggesting that physical interactions between SF-1, DP103, and PIASy are linked to transcriptional repression.
Repression of SF-1 via sumoylation. In contrast to the ubiquitously expressed E1 and E2 sumoylation enzymes, most of the known E3-SUMO ligases exhibit restricted expression patterns and therefore may direct tissue-specific sumoylation of protein substrates (48). In considering SF-1 sumoylation, three E3-SUMO ligases (PIASx␣, PIASy, and PIAS1) are all highly expressed in the adult testes (14,48), where SF-1 regulates multiple genes. SF-1 is also needed for male sexual differentiation (37,45), and it is possible that sumoylation of SF-1 is sexually dimorphic during development. Thus, silencing of male-specific genes in the ovary can be partially explained by lowered levels of SF-1 or by the actions of Dax1 (30, 41) but may also involve sumoylation. Interestingly, other factors that function in sexual differentiation, namely Sox9 and WT-1, contain sumoylation sites, and the combinatorial effects of sumoylation may ensure gene silencing in the female. Finally, it is worth considering the in vivo ratio of nonsumoylated to sumoylated receptors. In this regard, SF-1 haploinsufficiency (2, 28) may stem from inadequate SF-1 activity due to a reduction of protein levels coupled with extensive receptor sumoylation. Currently, our studies are limited to a loss-of-function analysis. Attempts to provide SUMO1 in cis to SF-1, as shown for other proteins (18,50), have failed due to the precise excision of SUMO1 in COS-7 cells (L. A. Lebedeva and H. A. Ingraham, unpublished data). Whether SF-1 or LRH-1 sumoylation confers any structural changes to the DBD, hinge, or LBD remains unclear; however, results from our ChIP analysis suggest that sumoylation does not alter the apparent DNA binding of a heterologous DBD. Moreover, given that Dax1-mediated repression of K194R SF-1 mutant is intact, we suggest that no gross conformational changes occur in the LBD of a sumoylation-defective receptor. Further structural analyses are needed and will require an appropriate SUMO-SF-1 chimera or SUMO stably conjugated to SF-1/LRH-1. Although our findings point to a functional role for Lys194 and Lys289 in SF-1 and LRH-1, respectively, the role of the minor sumoylation sites at Lys119 or Lys213 (Fig. 1A) is less apparent. Despite the fact that disumoylated SF-1 is only observed in vivo under conditions that promote efficient sumoylation, our functional analyses show that both the minor and major sumoylation sites act in concert to dampen receptor activity. In this regard, it remains to be established whether an ordered sumoylation of SF-1/LRH-1 occurs.
Recent studies report interdependency between sumoylation and phosphorylation. Mitogen-activated protein kinasemediated phosphorylation of Elk-1 greatly reduced sumoylation at adjacent lysines and led to increased transcriptional activity (50), and phosphorylation of heat shock factor 1 is a prerequisite for stress-induced sumoylation (16). Currently, we find no apparent relationship between phosphorylation of Ser203 and sumoylation of SF-1. Indeed, the phospho-deficient Historically, DEAD-box (Ddx) RNA helicases are associated with splicing, in part because they were initially identified as protein components of the spliceosome (43). However, other functions for Ddx family members have been noted, and there is mount-ing evidence that they function to silence transcription factors, including nuclear receptors, Egr1 to 4, and the Ets-like repressor, METS (12,21,36,49). Additionally, GRTH (Ddx25), which is expressed in the testes, is reported to attenuate expression of SF-1 target genes, including steroidogenic enzymes (10). For DP103 and another DEAD-box protein, DP97, the repression domain has been mapped to the C-terminal region and does not require the N-terminal ATPase/helicase domain characteristic of this gene family (21,36). Attenuation and silencing of transcription are multilayered and multidimensional. So how may Ddx proteins and sumoylation lead to (21). However, our data imply that repression through DP103 is TSA and NaBT insensitive and suggest that repression by Ddx proteins must involve additional mechanisms other than recruitment of class I or II HDACs. In considering other mechanisms, it is possible that DP103 protects SF-1 from desumoylation. This hypothesis is consistent with the observations that DP103 increased PIASdependent SF-1 sumoylation and that additional SENP1 eliminates repression by DP103. The interaction between DP103 and SF-1 remains to be mapped and is likely to involve multiple interfaces based on our finding that Lys194 and/or sumoylation at Lys119/Lys194 is not the sole determinant of this interaction. Another possible scenario is that DP103 represses SF-1 by facilitating PIASy-mediated relocalization of SF-1. However, we noted that sumoylation is dispensable for movement of SF-1 to nuclear bodies; this observation is reminiscent of PIASy-dependent relocalization of both wild-type and sumoylation-defective Lef1 into nuclear bodies that partially overlap with PML-NBs (39). Thus, while sumoylation is not required for subnuclear relocalization of SF-1 (or Lef1), conditions that promote optimal sumoylation do correlate with altered nuclear distribution of SF-1.
Given that DEAD-box proteins are present in both splicing and translational complexes (31), repression may be coupled to transcript processing or translational control. However, studies to date, including ours, have yet to identify a function for the RNA helicase (unwindase) and RNA binding motifs in repression. Indeed, the N-terminal portion of DP103 is dispensable for interaction and repression of SF-1 and METS (21,49) and for relocalization of SF-1 to nuclear bodies (our unpublished data). Further in vitro and in vivo experiments aimed at delin-eating the precise role of sumoylation in DEAD-box-mediated transcriptional repression will be of interest.
ACKNOWLEDGMENTS
We thank D. Morgan for helpful discussions and F. Poulat for sharing unpublished data regarding the PIAS1/SF-1 interaction and for His 6 -hSUMO1-pcDNA3. We also thank D. Pearce, R. Grosschedl, K. Shuai, Y. Sadovsky, and C. Glass for reagents. We especially thank B. Panning and C. de la Cruz for discussion and reagents for immunocytochemistry experiments. All cells were transfected with SUMO1 (100 ng). (D) Subnuclear signals are shown for wild-type GFP-SF-1 (green), and indirect immunofluorescence is shown for T7-PIASy (red) or FLAG-hDP103 (red). Colocalization of GFP-SF-1 and T7-PIASy signals are shown in the merged figure (upper panels), and the endogenous DP103 signals (lower panels) are indicated (arrowheads). Staining for endogenous SF2/ASF (marker for splicing speckles) or Sp100 (marker for PML-NBs) is shown (red). Note that no positive staining is observed for endogenous Sp100. In all conditions, cells were transfected with 100 ng (each) of GFP-SF-1, PIASy, hDP103, and SUMO1. | 6,349 | 2005-03-01T00:00:00.000 | [
"Biology"
] |
An Inexact Feasible Quantum Interior Point Method for Linearly Constrained Quadratic Optimization
Quantum linear system algorithms (QLSAs) have the potential to speed up algorithms that rely on solving linear systems. Interior point methods (IPMs) yield a fundamental family of polynomial-time algorithms for solving optimization problems. IPMs solve a Newton linear system at each iteration to compute the search direction; thus, QLSAs can potentially speed up IPMs. Due to the noise in contemporary quantum computers, quantum-assisted IPMs (QIPMs) only admit an inexact solution to the Newton linear system. Typically, an inexact search direction leads to an infeasible solution, so, to overcome this, we propose an inexact-feasible QIPM (IF-QIPM) for solving linearly constrained quadratic optimization problems. We also apply the algorithm to ℓ1-norm soft margin support vector machine (SVM) problems, and demonstrate that our algorithm enjoys a speedup in the dimension over existing approaches. This complexity bound is better than any existing classical or quantum algorithm that produces a classical solution.
Introduction
Linearly constrained quadratic optimization (LCQO) is defined as optimizing a convex quadratic objective function over a set of linear constraints. Linear optimization is a special case of LCQO that corresponds to the case where the objective function is linear. LCQO has rich theory, algorithms, and applications. Many problems in machine learning can be formulated as LCQO problems, including variants of least square problems and variants of support vector machine training [1,2]. Some important optimization algorithms also have LCQO subproblems, e.g., sequential quadratic programming [1].
The modern age of IPMs was launched by Karmarkar's projective method for linear optimization (LO). Since then, many variants of IPMs have also been applied to nonlinear optimization problems, including LCQO problems [3,4]. Contemporary IPMs progress towards the set of optimal solutions by moving within a neighbourhood of an analytic curve known as the central path. IPMs can be categorized according to whether or not the the sequence of iterates produced by the algorithm satisfies feasibility. Feasible IPMs are initialized with a strictly feasible solution and maintain feasibility in each iteration, whereas infeasible IPMs start from an infeasible interior solution and do not require feasibility to be exactly satisfied at any point of the algorithm. For LCQO problems with n variables, feasible IPMs can produce an -approximate solution using O( √ n log(1/ )) iterations, whereas infeasible IPMs require O(n 2 log(1/ )) IPM iterations to converge to an -approximate solution [5,6].
At each IPM iteration, a linear system needs to be solved to obtain the search direction, called the Newton direction. This so-called Newton linear system is traditionally in the form of the augmented system or the normal equation system. Classically, these linear systems can be solved exactly using Bunch-Parlett factorization if the matrices in the systems are symmetric indefinite [7], or Cholesky factorization if the matrices are symmetric positive definite. Solving the Newton linear systems using direct factorization approaches requires the use of O(n 3 ) arithmetic operations, which suggests that feasible IPMs based on factoring methods cannot exhibit complexity better than O(n 3.5 log(1/ )), whereas, with the partial update, they achieve O(n 3 log(1/ )) arithmetic operation complexity. The linear systems can also be solved inexactly using some inexact methods, e.g., Krylov subspace methods, which may require fewer iterations if the desired accuracy of the solutions to the linear systems is not high. However, inaccurately solving the Newton linear systems (i.e., the inaccuracy of the search directions) may result in the infeasibility of the sequence of solutions generated by IPMs; therefore, they have only been used in infeasible IPMs.
The advent of quantum technology has led to the development of many quantumassisted algorithms for optimization and machine learning applications, such as linear regression [8] and the support vector machine training problem [9]. Following the seminal work on quantum algorithms for solving linear systems of equations [10], researchers have been studying whether QLSAs could yield quantum speedups in classical optimization algorithms. In particular, quantum IPMs (QIPMs) that utilize QLSAs to solve the Newton linear system arising in each iteration have been proposed for LO problems [11,12] and semidefinite optimization problems [13]. To maintain the feasibility of the iterates using quantum subroutines, the authors of [13,14] introduce the so-called orthogonal subspace system (OSS) for SDO and LO problems, and, in particular, demonstrate that a feasible solution to the original Newton system can be recovered from an inexact solution to the OSS. However, linearly constrained quadratic optimization problems, which are fundamental to both optimization and machine learning, have yet to be formally studied in the quantum literature.
In this work, we generalize the OSS for LO problems in [14] to LCQO problems and provide an efficient method for constructing the OSS using a quantum computer. Using the OSS, we can obtain an inexact feasible IPM, solving for the search directions inexactly but maintaining the feasibility of the iterates throughout the process of our IPM. The feasibility of the iterates gives better IPM iteration complexity and the bottleneck becomes solving the linear system, OSS. In particular, we show that a quantum implementation of our algorithm with access to quantum RAM (QRAM) obtains an -approximate solution to a given LCQO problem with worst-case complexity O n,ω, 1 √ n n ω 2 + σ max (Q) κ VAQ + n 2 , whereω = max k ω k , σ max (Q) is the maximum singular value of the Hessian of the objective function and κ VAQ is the condition number of a matrix determined by initial data; see Lemma 3. We also consider the application of 1 -norm soft margin SVM problems, in which case, an -approximate solution is obtained with complexity Here, m is the number of features and n is the number of data points.ω, Q, and κ VAQ are defined similarly from the LCQO formulation of the SVM problem; see Section 4. The dependence on dimension is better than any existing quantum or classical algorithm.
The rest of this paper is organized as follows: in Section 2, we introduce IPMs for LCQO and the OSS system; in Section 3, we discuss how to use quantum algorithms to find the Newton directions and analyze the complexity of our IF-QIPM; in Section 4, we apply our IF-QIPM to the support vector machine problem. Discussions are provided in Section 5, and some technical proofs are moved to the Appendixes A and B.
Preliminaries
In this section, we introduce notations before reviewing the theory of IPMs applied to LCQO, and derive the OSS system for the class of problems.
Notation
Vectors are typically represented by lower-case letters. We write 0 n when referring to the n-dimensional all-zeros vector, and the n-dimensional all-ones vector is denoted by e n . When the dimension is obvious from the context, we may write 0 or e, respectively. Matrices are typically represented with upper-case letters. The identity of dimension n is denoted by I n×n , and 0 n×m represents the n × m-dimensional all-zero matrix, again, dropping these subscripts when the dimension is obvious from the context. For a general n × m-dimensional matrix H, we write H i· to refer to its ith row, and, similarly, denote the jth column by H ·j . For the (i, j)th element of H, we write H ij or H i,j .
For real-valued functions f 1 , f 2 , and f 3 , we write if there exists a positive number k 4 such that f 1 ≤ k 4 f 2 . We write if there exists a positive number k 5 such that f 1 ≤ k 5 f 2 × poly log( f 3 ).
IPMs for LCQO
In this work, LCQO is defined as follows.
Definition 1 (LCQO Problem). For vectors b ∈ R m , c ∈ R n , and matrices A ∈ R m×n and Q ∈ R n×n with rank(A) = m ≤ n and Q symmetric positive semidefinite, we define the primal and dual LCQO problems as: where x ∈ R n is the vector of primal variables, and y ∈ R m , s ∈ R n are vectors of the dual variables. Problem (P) is called the primal problem and (D) is called the dual problem.
Since A is of full row-rank, A does not contain any null rows, and we further make the following assumption on matrix A.
Assumption 1.
Matrix A has no all-zero columns.
Remark 1.
Suppose that A has zero columns. Without a loss of generality, assume that the nth column is all-zero. Introducing a new variable x n+1 , we can rewrite the problem as The new LCQO problem is equivalent to the original one, and contains fewer all-zero columns. Iterating this procedure to eliminate each of the all-zero columns, we obtain a new LCQO problem satisfying Assumption 1 with no more than 2n − m variables and n constraints in the worst case.
Assumption 2.
There exists a solution (x, y, s) ∈ R n × R m × R n such that Ax = b, x > 0, A T y + s − Qx = c, and s > 0.
The set of primal-dual feasible solutions is defined as and, similarly, the set of interior feasible primal-dual solutions is given by By strong duality, the set of optimal solutions can be characterized as P D * := {(x, y, s) ∈ P D : xs = 0}, where xs denotes the Hadamard, i.e., component-wise product of x and s. Let > 0; then, the set of -approximate solutions to Problem (1) can be defined as P D := (x, y, s) ∈ P D : x T s ≤ n . ( Let X and S be diagonal matrices of x and s, respectively. Under Assumption 2, for all µ > 0, the perturbed system of optimality conditions has a unique solution (x(µ), y(µ), s(µ)), and this set of solutions gives rise to the primaldual central path CP := (x, y, s) ∈ P D 0 |x i s i = µ for i ∈ {1, . . . , n}; for µ > 0 .
IPMs apply Newton's method to solve system (3). At each iteration of infeasible IPMs, a candidate solution to the primal-dual LCQO pair in (1) is updated by solving the following linear system to find the Newton direction: where are residuals, and σ ∈ (0, 1) is the barrier reduction parameter. If r p = 0 and r d = 0, then the solution (x, y, s) exactly satisfies primal-dual feasibility. We can also define residuals in different ways as we will show later. Once the Newton direction is found, one can move along the direction but has to stay in a neighbourhood of the central path, which is defined as where θ ∈ (0, 1).
Until relatively recently, inexact solution approaches to solve the Newton linear system (4) had only been utilized in inexact infeasible IPMs (II-IPMs). For LCQO problems, ref. [6] proposes an II-IPM using an iterative method to solve the Newton systems and obtains a worst-case iteration complexity O(n 2 log( 1 )). On the other hand, feasible IPMs for LCQO problems enjoy O( √ n log( 1 )) iteration complexity [15][16][17]. In [5], the author provides a general inexact feasible IPM for LCQO problems but does not discuss how the sequence of iterates could be guaranteed to maintain primal-dual feasibility exactly when using inexact linear system solvers. This is a vital consideration, as the feasible neighborhood of the central path as outlined in (5) is a subset of the primal-dual feasible set; if primal and dual feasibility are not satisfied exactly at any point in the algorithm, the iterates leave this neighborhood and the method fails. Our work fills this gap by using a method inspired by the QIPMs of [13,14].
Orthogonal Subspaces System
Assume that (x, y, s) ∈ P D 0 . To maintain the feasibility of the primal and dual variables, the first two linear equations in system (4) need to be solved with r p = 0 and r d = 0 exactly, which can be guaranteed if ∆x lies in the null space of A, denoted as Null(A), and ∆s = Q∆x − A T ∆y. Accordingly, we can rewrite system (4) by representing ∆x as a linear combination of basis elements of Null(A). To achieve this, we partition A as Then, we construct the following matrix: Matrix V has a full column rank and satisfies AV = 0, i.e., the columns of V span the null space of A. Let ∆x = Vλ, where λ ∈ R n−m is the unknown coefficient vector used to determine ∆x. Subsequently, we can rewrite system (4) by substituting ∆x and ∆s in the third equation as A similar system was proposed and called "Orthogonal Subspaces System" (OSS) in [13,14], and we use the same name in this work. The matrix in the OSS system (6) is of size n × n, and it is nonsingular. Even if the OSS system is solved inexactly, primal and dual feasibility are preserved by computing ∆x = Vλ and ∆s = QVλ − A T ∆y. Thus, we can conclude that any inexactness will only impact the third equation of (4), i.e., r p = 0 and r d = 0. This property of the OSS system is very convenient when analyzing the proposed inexact IPM, and allows us to obtain the best known iteration complexity for IPMs.
Inexact Feasible IPM with QLSAs
In this section, we propose our IF-QIPM for LCQO problems. We begin with the IF-IPM structure introduced by [5] and describe how to quantize it into an IF-QIPM. Then, we analyze the construction of the OSS system and conclude by analyzing the overall complexity of our IF-QIPM.
IF-IPM for LCQO
In [5], the author studies a general conceptual form IF-IPM for QCLO problems by assuming the feasibility of the primal and dual iterates, which induces the following system: where r c = σµe − XSe, with σ ∈ (0, 1) being the reduction factor of the central path parameter µ, i.e., µ new = σµ. When system (7) is solved with r c = σµe − XSe inexactly yielding an error r, if r 2 ≤ δ r c 2 for some δ ∈ (0, 1), the inexact IPM converges to an -approximate solution to Problem (1) in at most O( √ n log(1/ )) iterations. As we mentioned earlier, it is not specified in [5] how to preserve primal and dual feasibility when system (7) is solved inexactly. Thus, it is presently not clear whether one could recover the convergence conditions described in [5] using inexact approaches, which are reliant on the assumption of primal-dual feasibility (see, e.g., system (7)). Now, we present a general procedure of how to solve system (7) inexactly, while the inexactness error occurs only in the third equation of system (7). Let (λ, ∆y) be an inexact solution for system (6) and r be the error at this solution, i.e., The corresponding Newton step Recall that once (λ, ∆y) is determined, then (∆x, ∆s) is also (uniquely) determined. An interesting property is that, if (λ, ∆y) and (∆x, ∆y, ∆s) can be deduced from each other, then the OSS system and system (7) yield the same error term r. Hence, the convergence conditions built upon system (7) can be directly examined using the residual r c and error r of the OSS system. Let OSS be the target accuracy of the OSS system (6), i.e., where (λ * , ∆y * ) is the accurate solution. According to [5], in order to guarantee that the IF-IPM converges, we must have where δ ∈ (0, 1) is a constant parameter. Therefore, to ensure the convergence of the IF-IPM, it suffices to set The IF-IPM is presented in full detail in Algorithm 1. In each iteration, we build and solve system (6) classically. We solve system (6) to the accuracy just introduced above and then compute the feasible Newton step from the inexact solution and take a full Newton step.
In the quantum-assisted IF-IPM, or IF-QIPM, we propose accelerating Step 7 using quantum subroutines. In the next sections, we investigate how to use quantum algorithms to build and solve the OSS system and obtain the Newton direction.
IF-QIPM for LCQO
The pseudocode of our IF-QIPM is presented in Algorithm 2. At each iteration of the IF-QIPM, we construct and solve system (6) and compute the Newton direction using quantum algorithms. To obtain an OSS -approximate solution of system (6), we first block encode system (8); see Appendix A. Then, we use quantum algorithms to solve for an QLSA -approximate solution of system (8). This solution is normalized but we can rescale it to obtain an OSS -approximate solution of system (6). Details are discussed later in this section.
(λ k , ∆y k ) ← solve system (6) with accuracy k OSS quantumly 8: ∆x k = Vλ k and ∆s k = −A T ∆y k 9: Here, θ 0 < 1 and its value will be discussed later. First, we introduce some notations to simplify the OSS system. In the kth iteration of Algorithm 2, let Then, the OSS system can be rewritten as As discussed in [14], to solve the OSS system (6) using quantum algorithms, we can first rewrite it as the normalized Hermitian OSS system To use the QLSAs mentioned earlier, we need to turn the linear system (8) into a quantum linear system using the block encoding introduced in [18]. To this end, we first decompose the coefficient matrix in linear system (8) as where To compute matrix V, we need to find a basis matrix A B of matrix A and we need to compute the inverse matrix A −1 B . Both steps are nontrivial and can be expensive. However, we can reformulate the LCQO problem as follows: In this case, we have an obvious basis A B = I 0 0 I and matrix V can be constructed efficiently Since matrix A has no all-zero rows, matrix V has no all-zero rows either. This property of the reformulation is useful in the analysis of the proposed IF-QIPM but we do not want to build the complexity analysis on the reformulated problem. Thus, without a loss of generality we may make the following assumption.
Assumption 3. Matrix A is of the form A = I A N .
To simplify the analysis, we further assume that the input data are integers. Based on the two assumptions above, we have the following lemma.
where V i· and (A N ) i· are the ith row of V and A N , respectively. Now, we are ready to give θ 0 in our definition of the central path neighborhood; see (5). We set We also define ω k as the maximum of the values of primal variables and dual slack variables in the kth iteration.
Definition 2.
Let (x k , y k , s k ) be a candidate solution for Problem (1); then, As is standard in the literature on quantum algorithms, in this work, we assume access to quantum random access memory (QRAM). Then, Step 7 of Algorithm 2 consists of three parts: (1). use block encoding to build system (8); (2). use QLSAs to solve system (8); (3). use quantum tomography algorithms (QTAs) to extract the classical solution. We use the block-encoding methods introduced in [18] to block-encode linear system (8).
Proposition 1.
In the kth iteration of Algorithm 2, using the block-encoding methods introduced in [18] and the decomposition described in Equations (9) and (10), a -block-encoding of the matrix in system (8) can be implemented efficiently and the complexity will be dominated by the complexity of the QLSA step. Here, QLSA is the accuracy required for the QLSA step and κ M k is the condition number of matrix M k .
Proof. See Appendix A for proof.
Provided access to QRAM, the complexity associated with block encoding the OSS system coefficient matrix and preparing a quantum state encoding the right hand side amounts to polylogarithmic overhead. The cost of these steps is therefore negligible when compared with the complexity contributed by QLSAs and QTAs, so we ignore it here. To bound the total complexity contributed by QLSAs and QTAs, we first need to analyze the accuracy of QLSA characterized by QLSA , the accuracy of QTA characterized by QTA , and their relationship.
In each iteration, we use a QLSA to solve the block-encoded version of system (8) and obtain an QLSA -approximate solution. Then, we use a QTA to extract an QTAapproximate solution from the quantum machine. In the context of QLSAs and QTAs, ifz is an -approximate solution of z, thenz satisfies Observe that this definition of accuracy differs from the concept of -approximate solutions defined in (2). Similar to [12,13], the QLSA we use is proposed by [19] and the QTA we use is proposed by [20]. Following the argument in Section 2 in [12], we can establish the relationship among QLSA , QTA , and k OSS as where k OSS is defined as the 2 norm of the residual when solving system (8) inexactly in the kth iteration. This coefficient is also used to rescale the solution. According to [12], we rescale the normalized solution obtained from QLSA and QTA by to obtain the k OSS -approximate solution for system (6). Here, we did not add superscript to QLSA and QTA , and the reason shall be revealed later. Let 0 k z k be an inexact solution for system (8) in the kth iteration. Then, the norm of residual of system (8), which is k OSS , and the norm of residual of system (6), which is M kzk − r k c 2 , satisfies Recall that the error arising from the OSS system (6) is the same as the error in the full Newton system (7); then, we can directly use the convergence condition in [5], i.e., We can require and it follows that ensures the convergence of the IF-QIPM. The complexities for each step are also available now. Using the QLSA from [19] and QTA from [20], we have the complexity for QLSA and QTA: Since we have QTA = δ 2 and δ ∈ (0, 1) is a constant parameter, we omit QTA in the Big-O notation. Note that the complexity of the block-encoding procedure is dominated by that of QLSA and QTA and thus we ignore the complexity contributed by block encoding. In Step 8, the complexity contributed by computing Newton step from OSS solution is O(n 2 ). The total complexity for the kth iteration of IF-QIPM will be 3.2.1. Bound for ω k / M k F In this section, all of the quantities that we consider are from the kth iteration. For simplicity, we omit the superscript k in this section unless we need it. Using the property of trace, we have Recalling the central path neighborhood that we defined in (5), we define a matrix E such that It is obvious that E is a diagonal matrix and satisfies With this, we can have tr XQVV T S = tr SXQVV T = tr (θµE + µI)QVV T = tr θµEQVV T + tr µQVV T .
For the second term, we know that Q and V T QV are both positive semidefinite. Thus, we can have tr QVV T = tr V T QV ≥ 0 because of the cyclic invariant property of trace. According to the Cauchy-Schwarz inequality, we have Thus, we have where the last inequality holds due to condition (11). Thus, we can bound M F by Since XQVV T QX 0, we have Since X and S are both positive diagonal matrices, we have As we mentioned in the very beginning of this section, at each iteration, ω is indeed ω k , but the superscript is ignored here. Now, we aim to find a bound for µ so we can further bound M 2 F . Since ω is the upper bound for the magnitude of the primal and dual slack variables, we have Recall the definition of matrix E; see (14). Thus, we have Thus, where the last inequality follows from the bound for θ; see (11). Thus, we have
Bound for κ M k
Similar to the previous section, we ignore the superscript k unless we need it. We will start with a general result and then work on the matrix M k . The following lemma is a well-known result regarding condition numbers of matrices and can be proven using Courant-Fischer-Weyl min-max principle [21].
Lemma 2.
For any full row rank matrix P ∈ R m×n and symmetric positive definite matrix D ∈ R n×n , their condition number satisfies Next, we analyze the matrix in the OSS system (8). Specifically, we focus on M T M since we are interested in the spectral property of the OSS system (8). Using the matrix E defined in (14), we have the following decomposition: The second equality holds because as AV = 0 and Q is symmetric. Then, plugging (14) into the first diagonal block of the decomposition we obtained earlier, we have The first two matrices are nonsingular, so we can apply the Lemma 2, and thus we only need to study the middle matrix. Denote the middle matrix by Ψ. Observe that Ψ is almost the same as its counterpart in [14]. Subsequently, we have the following result regarding the spectral property of M k .
where κ VAQ is the condition number of the matrix Putting all of these together, we have the complexity for our IF-QIPM for LCQO problems.
Theorem 1. The IF-QIPM for LCQO problems stops with the final duality gap less than in at most O √ n log(1/ ) IPM iterations and, in each IPM iteration, the Newton direction can be obtained with complexity O n,ω, 1 n ω 2 + σ max (Q) κ VAQ + n 2 , whereω = max k ω k .
Proof. The complexity bound for the IPM iterations comes from the result in [5]. According to (13), the complexity for obtaining the Newton direction is Combining this with the result in Sec. 3.2.1, the bound in Lemma 3, and µ k ≥ , we have O n,ω, 1 nω k κ M k M k F + n 2 = O n,ω, 1 n ω 2 + σ max (Q) κ VAQ + n 2 .
Application in Support Vector Machine Problems
In this section, we discuss how to use our IF-QIPM to solve SVM problems. We show that our algorithm can solve 1 -norm soft margin SVM problems faster than any existing classical or quantum algorithms with respect to dimension.
The ordinary SVM problem works on a linearly separable dataset, in which the data points have binary labels. The ordinary SVM aims to find a hyperplane correctly separating the data points with a maximum margin. However, in practice, the data points are not necessarily linearly separable. To allow for mislabelling, the concept of a soft margin SVM was introduced in [22]. Let {(φ i , ζ i ) ∈ R m × {−1, +1}|i = 1, . . . , n} be the set of data points, Φ be a matrix with the ith column being φ i , and Z be a diagonal matrix with the ith diagonal element being ζ i . The SVM problem with an l 1 -norm soft margin can be formulated as below. min (ξ,w,t)∈R n ×R m ×R Here, (w, t) determines a hyperplane and C is a penalty parameter. In [9], the authors rewrote the SVM problem as a second-order conic optimization (SOCO) problem and used the quantum algorithm that they proposed to solve the resulting SOCO problem. They claim the complexity of their algorithm has O(n 2 ) dependence on the dimension, which is better than any classical algorithm. However, the algorithm in [9] is invalid. Their algorithm is an inexact infeasible-QIPM (II-QIPM), while they used the IPM complexity for the feasible-QIPM, which ignores at least O(n 1.5 ) dependence on n. They also missed the symmetrization of the Newton step, which is necessary for SOCO problems and makes their Newton step invalid.
Aside from [9], some pure quantum algorithms for SVM problems are also proposed. In [23], the authors propose a pure quantum algorithm for SVM problems. They claim the complexity is O(κ 3 eff −3 log(mn)), where κ eff is the condition number of a matrix involving the kernel matrix and is the accuracy. In the worst case, κ eff = O(m). Their complexity is worse than ours regarding the dependence of dimension and accuracy. In addition, their algorithm does not provide classical solutions. Namely, the solution is in the quantum machine and we cannot read or use it in a classical computer. However, our algorithm produces a classical solution.
To convert the problem into standard-form LCQO, we introduce (w + , w − ) ∈ R m + × R m + , (t + , t − ) ∈ R + × R + , and a slack variable ρ ∈ R n + . Then, we can obtain the following formulation: This is a standard-form LCQO problem with non-negative variables (w + , Thus, we can use the proposed IF-QIPM for LCQO problems to solve the 1 -norm soft margin SVM problems and obtain an -approximate solution with complexity O m,n,ω, 1 (m + n) 1.5 ω 2 This dependence on dimension is better than any existing quantum or classical algorithm.
Discussion
In this work, we present an IF-QIPM for LCQO problems by combining the IF-IPM framework proposed in [5] and the OSS system introduced in [14]. Our algorithm has n 1.5 dependence on n, which is better than any existing algorithms for LCQO problems. The dependence on the accuracy is polynomial, which is worse than classic IPMs. Iterative refinement techniques might help to improve the dependence on the accuracy but they are beyond the discussion of this work.
Conflicts of Interest:
The funder had no role in the design of the study; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
Appendix A. Block Encoding of the OSS System
In this section, we ignore the superscript k for simplicity. As described in Equation (9), we first block encode each of the matrices involved in (10). We assume that V, A, S, and X are given and are stored in a quantum accessible data structure (we ignore the complexity to store the classical information into the quantum machine). For the first matrix -block-encoding of M 1 can be implemented efficiently according to Lemma 50 from [18]. The second matrix Then, we can block encode the two matrices first, and then apply a linear combination to obtain M 3 . In fact, a ( Q F , O(poly log(n)), 3 ) -block-encoding of the left matrix can be implemented efficiently according to Lemma 50 from [18] and a (1, O(poly log(n)), 3 ) -block-encoding of the right matrix can be implemented efficiently according to Lemma 48 in [18]. With the state-preparation cost of the linear combination coefficient vector (1, 1) neglected, a ( Q F + 1, O(poly log(n)), ( Q F + 1) 3 ) -block-encoding of M 3 can be implemented efficiently according to Lemma 52 from [18]. The fourth matrix is one-row-sparse and two-columns-sparse. After being scaled by 1 ω , each element of M 4 /ω has an absolute value of at most 1. According to Lemma 48 in [18], a -block-encoding can be implemented efficiently according to Lemma 53 from [18]. For the linear combination M 2 /ω + M 3 M 4 /ω, the cost for the state-preparation of the coefficient vector (1, 1) is negligible and thus a -block-encoding can be implemented efficiently according to Lemma 52 from [18]. For the matrix multiplication of O(poly log(n)), -block-encoding can be implemented efficiently according to Lemma 53 from [18].
Finally, considering that the complexity of the state-preparation of the vector can be neglected, a O(poly log(n)), -block-encoding of the coefficient matrix of system (8) can be implemented efficiently according to Lemma 52 from [18]. We can choose where K depends on the initial data Now, considering that the complexity for all of the block-encoding algorithms that we have used so far has poly-logarithmic dependence on the dimension and accuracy, and that, for i = 1, 2, 3, 4 O poly log( 1 i ) = O(poly log(κ M )), the complexity for block encoding will be dominated by the complexity for QLSA because QLSA has linear dependence on κ M , we can ignore the complexity of block encoding.
Appendix B. Spectral Analysis for Matrix Ψ
In this section, we provide the spectral analysis for the matrix Just like in the previous section, for simplicity, we ignore the superscript k. We can perform the following decomposition: Let us use the following notation: It can be proven that Ψ 1 is positive definite. The majority of the proof of this conclusion comes from the paper [14]. For the reader's convenience, we provide the complete proof here.
Matrix Ψ 1 is a block diagonal matrix, with all four blocks being diagonal matrices. Thus, we can easily compute the eigenvalues using the characteristic polynomial Clearly, det(Ψ 1 − qI) = 0 gives n quadratic equations and each quadratic equation gives two eigenvalues. The two eigenvalues from the ith quadratic equation are Recalling the definition of E in (14), we can write One can verify that the square root always exists because With θ ∈ 0, min 1 3 √ n , 1 4 QVV T F +1 , we have This means that matrix Ψ 1 is positive definite and its eigenvalues coincide with its singular values because Ψ 1 is also real and symmetric. Analogously, we have Thus, the condition number of Ψ satisfies κ(Ψ) ≤ σ max (Ψ 1 ) + σ max (Ψ 2 ) σ min (Ψ 1 ) + σ min (Ψ 2 ) where the last inequality comes from the definition of ω. Since ω 2 ≥ x i s i ≥ (1 − θ)µ, we have κ(Ψ) = O ω 2 (ω 2 + µσ max (Q)) µ 2 .
Using Lemma 2, we can also bound the condition number of matrix M by | 8,083.6 | 2023-01-13T00:00:00.000 | [
"Computer Science"
] |
Unveiling the invisible: mathematical methods for restoring and interpreting illuminated manuscripts
The last 50 years have seen an impressive development of mathematical methods for the analysis and processing of digital images, mostly in the context of photography, biomedical imaging and various forms of engineering. The arts have been mostly overlooked in this process, apart from a few exceptional works in the last 10 years. With the rapid emergence of digitisation in the arts, however, the arts domain is becoming increasingly receptive to digital image processing methods and the importance of paying attention to this therefore increases. In this paper we discuss a range of mathematical methods for digital image restoration and digital visualisation for illuminated manuscripts. The latter provide an interesting opportunity for digital manipulation because they traditionally remain physically untouched. At the same time they also serve as an example for the possibilities mathematics and digital restoration offer as a generic and objective toolkit for the arts. Electronic supplementary material The online version of this article (10.1186/s40494-018-0216-z) contains supplementary material, which is available to authorized users.
Introduction
The digital processing, analysis and archiving of databases and collections in the arts and humanities is becoming increasingly important. This is because of a myriad of possibilities that digitisation opens up that go well beyond the organisation and manipulation of the actual physical objects, allowing, for instance, the creation of digital databases that are searchable with respect to several parameters (keywords), the digital processing and analysis of objects that are non-destructive to the original object, and the application of automated algorithms for sorting newly found objects into existing digital databases by classifying them into pre-defined groups in the database. These possibilities go hand-in-hand with ever-growing advances in data science that are developing mathematical methodology for analysing and processing digital data. A large component of digital data in the arts and humanities is composed of digital images. Despite many developments of mathematical image analysis methods in applications like biomedicine, the physical sciences and various forms of engineering, the arts and humanities have been mostly overlooked as an application in need of bespoke mathematical image analysis methods. Still, a few examples in this context exist and encompass works on forgery detection [1], the digital restoration of paintings with the Ghent Altarpiece [2][3][4][5][6][7] and Van Gogh's Field with Irises [8][9][10] being prominent examples in these efforts, the digitally guided restoration of frescoes as done for the Mantegna frescoes [11,12] and the Neidhart frescoes [13,14], the algorithm-based analysis and classification of texture in paintings [15,16], learned representations of artists' styles and painting techniques [17,18], and multi-modal image registration and colour analysis in paintings [19][20][21][22][23], just to name a few.
In this work we discuss a range of mathematical methods for correcting and enhancing images of illuminated manuscripts. In particular, we consider automated and semi-automated models for digital image restoration based on partial differential equations, exemplar-based image inpainting and osmosis filtering, and their translation to the digital interpretation of illuminated manuscripts. Here, we refer to mathematical image processing as the task of digital image restoration (or reconstruction), that is the digital processing of a given image to correct for its visual imperfections. Generally, this is done with the main intention of producing a final result where imperfections have been corrected in a visually least distracting way. This is the case for several imaging tasks such as image denoising, deblurring and also image inpainting.
Medieval and Renaissance illuminated manuscripts present a particular challenge, but also an opportunity to transform current understanding of European visual culture between the 6th and 16th century. Illuminated manuscripts are the largest and best preserved resource for the study of European painting before 1500. Nevertheless, the images in some manuscripts have been affected by wear-and-tear, degradation over time, iconoclasm, censorship or updating. Unlike the conservation of other painted artefacts, the conservation of illuminated manuscripts preserved in institutional collections is non-invasive, usually restricted to repairs of the binding and of torn parchment or paper, and rarely involves the consolidation of flaking pigments. For the study of illuminated manuscripts, physical restoration and repairs are often disregarded. This minimal approach is due largely to the fact that when compared to wall or easel paintings, the images in illuminated manuscripts are relatively small and their pigment layers are few and very delicate. It is not possible to remove over-painting without damaging or completely removing the original painting beneath. The removal of even the smallest sample or the restoration of even the smallest painted area would constitute a considerable change to the overall image. As a consequence, pigment losses are often not filled in and overpaintings added on top of the superficial layers can often not be removed to reveal the original images. Virtual restoration is thus the only way to recover damaged illuminations, whether by infilling paint losses or by removing over-painted layers or indeed both. Bringing the images as close as possible to their original form would ensure both their accurate scholarly interpretation and their full appreciation by wider audiences. Damaged or inaccurately restored illuminations can lead to the exclusion of seminal works of art from academic debates or to incomplete and misleading interpretations of the dating, origin and artists involved. Preserving the current state of the illuminations in line with conservation ethics, faithful digital restoration would serve as a reliable surrogate for multiple reconstructions, enabling research, teaching and wider appreciation for manuscripts.
The reliable processing of illuminated manuscripts requires a multi-disciplinary collaboration as the current work is based on. In what follows we discuss a range of new adaptive, semi-automated restoration methods that (a) reconstruct image-structures using partial differential equations [13,14,[24][25][26][27][28], (b) mimic the humanexpert behaviour, using texture-and structure patches sampled from the intact part of the illuminated manuscript at hand and integrating them in exemplar-based inpainting approaches [29,30] in order to provide a digital restoration in agreement with the available information and pleasant to the eye (c) exploit infrared imaging data, correlating the visible image content with its traces in the hidden layers of paint [31,32], and (d) create new 3D interpretations of illuminated manuscripts through a new 3D conversion pipeline [33]. The pre-sequel of this work is an article in the exhibition catalogue [32].
Organisation. In "Retrieving missing contents via image inpainting" section we propose a semi-supervised approach for the segmentation of damaged areas of colour accurate images (in the following referred to simply as RGB images) of illuminated manuscripts and for the retrieval of missing information via a two-step image inpainting model. In "Looking through the layers via osmosis filtering" section we consider the mathematical model of image osmosis to integrate super-painted visible image information on a manuscript with hidden infrared ones for looking through the layers of a restoration process. Finally, in "Creating a 3D virtual scene from illuminated manuscripts" section we present a mathematical pipeline to convert a 2D painting into a 3D scene by means of the construction of an appropriate depth map.
Retrieving missing contents via image inpainting
The problem of image inpainting can be described as the task of filling in damaged (or occluded) areas in an image f defined on a rectangular domain by transferring the information available in the intact areas of the image to the damaged areas in the image. Over the last 30 years a large variety of mathematical models solving the image inpainting problem have been proposed, see, e.g., [28,34] for a review. In some of them, image information is transferred into the damaged areas (the so-called inpainting domain, denoted by D in the following) by using local information only, i.e. by means of suitable diffusion and transport processes which interpolate image structures in the immediate vicinity of the boundary of D in the occluded region. Such techniques have been shown to be effective for the transfer of geometric image structures, even in the presence of large damaged areas [28]. However, because of their local nature, such methods do not make use of the entire information contained in the intact image regions. In particular, such methods do not take into account non-local image information in terms of patterns and textures nor image contents located far away of D. For this reason, non-local mathematical models exploiting self-similarities in the whole image have been proposed [29,30,35,36]. Such models operate on image patches rather than single pixels. Small patches inside D are iteratively reconstructed by comparison with patches outside D in a suitable distance. Missing patches are then reconstructed by copy and paste of a closest patch (or its centre pixel) from the intact part of the image. These models have been proven to be impressively effective in a very large variety of applications and rendered computationally feasible in recent years with the well-known PatchMatch algorithm [37].
The first step of any inpainting algorithm is the decomposition of the image domain in damaged and undamaged areas. This is an image segmentation problem, decomposing a given image into its constituting regions, cf. for instance [34]. Its solution may be rendered very hard in the presence of fuzzy and irregular region boundaries and small scale objects.
In the following we describe an algorithm which detects damaged areas in images with possibly large and non-homogeneous missing regions using few examples provided by the user. This is then used as a necessary initial step for the subsequent application of a two-stages inpainting procedure based on total variation inpainting [38] and exemplar-based image inpainting proposed in [36] for the reconstruction of image contents in the images of the illuminated manuscripts in Fig. 1. Our proposed segmentation is semi-supervised since user input is required for training, while the inpainting procedure is fully automated.
Description of the dataset
Our dataset is composed of two manuscripts made by William de Brailes in 1230-1250 and now part of the collection of the Fitzwilliam Museum in Cambridge (UK), see Fig. 1: Last Judgement in Fig. 1a and Christ in Majesty with King David playing the harp in Fig. 1b, of dimension 196x123mm and 213x135mm, respectively. The images are acquired with a Leaf Valeo 22 back utilising a Mamyia RB67 body and the resulting RAW files are processed using Leafs own proprietary software, where distortions and aberrations are corrected. Also, the colour accuracy is provided by using a customized Kodak colour separation guide with grey-scale (Q13 equivalent) and exported in Adobe 98 colour space. The final output results in very large .tif images (about 4008 × 5344 pixels and 47 MB each).
A semi-supervised algorithm for the detection of the damaged areas
For identifying the damaged areas in the image (mainly missing gold leaves) we propose in the following a twostep semi-supervised algorithm. Here, a classical binary segmentation model is used first for the extraction of a small training region as described in "Chan-Vese segmentation" section which subsequently serves as an input for a labelling algorithm which segments the whole inpainting domain based on appropriate intensity-based image features in "Image descriptors: feature extraction" and "A clustering algorithm with training" sections.
Chan-Vese segmentation
In binary image segmentation one seeks to partition an image in two disjoint regions, each characterised by distinctive features. Typically, RGB intensity values are used to describe image contents and mathematical image segmentation methods often compute the required segmented image as the minimiser of an appropriate functional.
Let f be the given image. We seek a binary image u so that where C is a closed curve. In this work, we consider the Chan-Vese segmentation functional for binary image segmentation [39], that is The functional F is minimised for constants c 1 and c 2 and the contour C, i.e. the optimal u of the form (1). Here, µ, ν, 1 , 2 > 0 are positive parameters and int(C), ext(C) denote the inner and the outer part of C, respectively. In (2) the first and second term penalise the length of C and the area of the region inside C, respectively, giving control on the smoothness of C and the size of the regions. The two other terms penalise the discrepancy between the fitting of the piecewise constant u in (1) and the given image f in the interior and exterior of C, respectively. By computing a minimum of (2) one retrieves a binary approximation u of f. Despite being very popular and widely used in applications, the Chan-Vese model and its extensions present intrinsic limitations. Firstly, the segmentation result is strongly dependent on the initialisation: in order to get a good result, the initial condition needs to be chosen within (or sufficiently close to) the domain one aims to segment. Secondly, due to the modelling assumption (1), the Chan-Vese model works well for images whose intensity is locally homogeneous. If this is not the case, the contour curve C may evolve along image information different from the one we want to detect. Images with significant presence of texture, for instance, can exhibit such problems. Furthermore, the model is very sensitive to the length and area parameters µ and ν , which may make the segmentation of very small objects in the image difficult.
For our application, we make use of the Chan-Vese model 1 to segment a sub-region D 1 of D that will serve as a training set for the classification described in the following two subsections. To do that, we ask the user (typically, an expert in the field) simply to click on a few pixels inside the inpainting domain D to identity a candidate initial condition for the segmentation model (1), which is then run to segment the subregion D 1 . In Fig. 2 we show the results of this approach with a superimposed mask of the computed region D 1 for some details cropped from the original images.
Because of the intrinsic limitations of the Chan-Vese approach, we observe that the segmentation result is not satisfactory (see, for instance, the example in the first row of Fig. 2) since it generally detects with high precision only the largest uniform region around the user selection. To detect the whole inpainting domain D in this manner, the user should in principle give many initialisation points, which may be very demanding in the presence of several disconnected and possibly tiny inpainting regions.
For this reason, we proceed differently and make use of a feature-based approach to use the area D 1 as a training region for a clustering algorithm running over the whole set of image pixels. This procedure is described in the next two sections.
Image descriptors: feature extraction
In order to describe the different regions in the image in a distinctive way, we consider intensity-type features. Namely, for every pixel x in the image we apply non-linear colour transformations to compute the HSV (Hue, Saturation, Value), the geometric mean chromaticity GMCR [40], the CIELAB and the CMYK (Cyan, Magenta, Yellow, Key) values (see [41] for more details). a b Fig. 1 Illuminated manuscripts. These two illuminated manuscripts show large and non-homogeneous damaged areas, mainly removal of gold leaves, see "Retrieving missing contents via image inpainting" section for more details Once this is done, we append all these values and store them in a feature vector ψ of the form For our purpose the feature vector (3), essentially based on RGB intensities, rendered precise segmentations. For more general segmentation purposes, one could add texture-based features and, if available, multi-spectral measurements such as infrared IR or ultraviolet UV images.
A clustering algorithm with training
Once the feature vectors are built for every pixel in the image, we use the training region D 1 detected as described in "Chan-Vese segmentation" section as a dictionary to drive the segmentation procedure extended to the whole image domain. We proceed as follows. First, we run a clustering algorithm over the whole image domain comparing the features defined in (3) in order to partition the image in a fixed number of K clusters. To do that, we use the well-known k-means algorithm. 2 After this preliminary step, we check which cluster has been assigned to the training region D 1 and simply identify in the clustered image which pixels lie in the same cluster. By construction, this corresponds to finding the regions in the image 'best-fitting' the training region in terms of the features defined in "Image descriptors: feature extraction" section, which is our objective. After a refinement step based on erosion/dilation of extracted regions, so as to remove or fill-in possibly misclassified pixels, we can finally extract the whole area to inpaint D. We report the results corresponding to Fig. 2 in Fig 3a, b.
Inpainting models
Once an accurate segmentation of the damaged areas is provided, the task becomes the actual restoration of the image contents in D by means of the available information in the region \ D . A standard mathematical approach solving such an inpainting problem consists in minimising an appropriate function E defined over the image domain , i.e. in where f denotes the given image to restore, � · � 2 is the Euclidean norm, and appropriately chosen positive parameter and χ \D denotes the characteristic function of the non-occluded image areas, so that for every pixel x ∈ : The second term in (5) is as a distance function between the given image f and the sought after restored image u in the intact part of the image. The multiplication of f − u by the characteristic function χ implies that this term is simply zero for the points in D, since there is no information available, while f − u for all the points in \ D has to be as small as possible. The term R typically encodes local information (such as gradient magnitude) which is the responsible of the transfer of information inside D by means of possibly non-linear models [28,34]. The transfer process is balanced with the trust in the data by the positive parameter . A classical choice of a gradientbased inpainting model consists in choosing i.e. the Total Variation of v [38]. As mentioned above such an image inpainting technique is not designed to transfer texture information. Furthermore, it fails in the inpainting of large missing areas. For our purposes we use (6) as an initial 'good' guess with which we initialise a different approach based on a non-local inpainting procedure as described in the following section.
Exemplar-based inpainting
We describe here the non-local patch-based inpainting procedure studied in [30,36] and carefully described in [42] from an implementation point of view. 3 In the following, we define for any point x ∈ the patch neighbourhood N x as the set of points in in a neighbourhood of x. Assuming that the patch neighbourhood has cardinality n, by patch around x we denote the 3n-dimensional vector P x = (u(x 1 ), u(x 2 ), . . . , u(x n )) where the points x i , i = 1, . . . n belong to patch neighbourhood N x . In order to measure 'distance' between patches, a suitable patch measure d can be defined, so that d(P x , P y ) stands for the patch measure between the patches around the two points x and y. We define then the Nearest Neighbour (NN) of P x as the patch P y around some point y minimising d.
For an inpainting application the task consists then in finding for each point x in the inpainting domain D the best-matching patch P y outside D. Assuming that each NN patch can be characterised in terms of a shift vector φ defined for every point in (i.e. assuming there exists a rigid transformation φ which shifts any patch to its NN), the problem can be formulated as the minimisation problem Heuristically, every patch in the solution of the problem above is constructed in such a way that in the damaged region D the patch has a correspondence (in the sense Second step in the detection of the damaged region. The k-means clustering algorithm is run on the whole image selection in terms of intensity-based image features, cf. "Image descriptors: feature extraction" and "A clustering algorithm with training" sections. The outputs of the binary segmentation algorithm shown in Fig. 2 are used as guidance for the clustering algorithm of the measure d) with its NN patch in the intact region \ D . Following [42], we use the following distance: From an algorithmic point of view, solving the model involves two steps: the first consists in computing (approximately) the NN patch for each point in D, so as to provide a complete representation of the shift map φ . This can be computationally expensive for large images.
In order to solve this efficiently, a PatchMatch [37] strategy can be applied. Afterwards a proper image reconstruction step is performed, where for every point in D the actual corresponding patch is computed. We refer the reader to [42] for full algorithmic details. A crucial ingredient for a good performance of the exemplar-based inpainting algorithm [30,36] is its initialisation. In particular, once the inpainting domain is known, a pre-processing step where a local inpainting model, such as the TV inpainting model (5) with (6), can be run to provide a rough, but reliable initialisation of the algorithm. 4 We report the results of the combined procedure in Fig. 4 and the overall work-flow of the algorithm in the diagram in Fig. 5.
Model parameters
For the segmentation of the training region D 1 within the inpainting domain D we use the activecontour MATLAB function by which the Chan-Vese algorithm can be called. For this we fixed the maximum number of iterations to maxiter= 1000 and use the default value as a tolerance on the relative error between iterates as a stopping criterion. We use the default values for the parameters µ and ν in (2). The subsequent clustering phase was performed by means of the standard MATLAB kmeans function after specifying a total of K = 35 labels to assign. The use of such a large value for K turned out to be crucial for an accurate discrimination. The automatic choice of the value of K for this type of applications is a matter of future research. The clustering was iteratively repeated 5 times to improve accuracy. Once the detection of the inpainting domain is completed, in order to provide a good initialisation to the exemplar-based model we use the TV inpainting model (4) with (6) with the value = 1000 and a maximum number of iterations equal to maxiter2= 1000 with a stopping criterion on the relative error between iterates depending on a default (8) tolerance. Finally, we followed [42] for the implementation of the exemplar-based inpainting model: for this we specified 12 propagation of iterations and tested different sizes for the patches. In order to avoid memory shortage, we restricted ourselves to patches of size 5 × 5 , 7 × 7 and 9 × 9.
Discussion and outlook
We proposed in this section a combined algorithm to retrieve image contents from two images of illuminated manuscripts shown in Fig. 1 where very large regions have been damaged. At first, our algorithm computes an accurate segmentation of the inpainting domain which is performed by means of a semi-supervised method exploiting distinctive features in the image. Then, taking the segmentation result as an input, the procedure is followed by an exemplar-based inpainting strategy (upon suitable initialisation) by which the damaged regions are filled.
The results reported in Figs. 4 and 6 confirm the effectiveness of the combined method proposed. In particular, when looking at the difference between standard local (TV) image inpainting methods and the exemplar-based one we immediately appreciate the higher reconstruction quality in the damaged regions, especially in terms of texture information. The method has been validated on several image details extracted from the entire images, and has been shown effective also for very large image portions with highly damaged regions.
In term of computational times, the segmentations in Fig. 3 are obtained in approximatively 15 min. The inpainting results in Fig. 4 are obtained in about 3 min for patches of size 5 × 5 and about 7 min for patches of size 7 × 7 . Overall the whole task of segmenting and inpainting the occluded regions takes approximatively 20 min per image of size 690 × 690 . However, these results highly depend on the size of the image, the size of the inpainting domain and the size of the patches chosen.
Future work could address the use of different features for the segmentation of the inpainting domain with similar methodologies, such as for instance texture features [43]. Furthermore, at an inpainting level, we observe that the reconstruction of fine details in very large damaged regions (such as the strings of the harp in Fig. 6) is very challenging due to the lack of correspondence with similar training patches in the undamaged region. For solving this problem a combination of exemplar-based and local structure-preserving inpainting models could be used. Calatroni et al. Herit Sci (2018) 6:56 Looking through the layers via osmosis filtering In the previous section the image content in the damaged areas of the illuminations is completely lost and it was estimated only from the information available in the rest of the picture. This, however, is not the only kind of degradation encountered in the process of restoration of illuminated manuscripts. In some cases parts of an illumination are painted over. In this section we discuss as a b c Fig. 4 Inpainting of damaged areas in Fig. 2. Once the inpainting domain is detected, the TV inpainting model (5, 6) is used to provide a good initialisation for the exemplar-based model (7). The final result shows the desired transfer of both geometric and texture information in the damaged areas. Patch size: 5 × 5 (upper row), 7 × 7 (bottom row) such an example the illuminations from the primer of Claude de France which illustrate the story of Adam and Eve in the garden of Eden. The two figures were originally depicted naked, as described in the book of Genesis but a later owner wanted them clothed with additional veils, leaves or beast skin added in the illumination, cf. Fig. 7.
The use of infrared imaging as shown for instance in Fig. 8 allows to look through these added layers, unveiling hidden structural information underneath the painted layer. All the input colour images and their reflectogram are freely available on the Fitzwilliam museum website 5 along with some more information about the manuscript, in particular the pigments used.
In this section we aim to fuse the details appearing in the near infrared reflectogram (IR) with the colours of the visible colour image, in particular the skin tones, to create a digital version of the illuminations as they could have looked before overpainting. Since we only have access to one near infrared reflectogram and we cannot chose the wavelength and have no information on the pigments used, we find ourselves in one of the following three situations: (i) the added cloth is transparent in the IR; (ii) the added cloth appears in the IR but without texture; (iii) the added cloth and its texture appear in the IR. The fact that the original pigments can also be IR transparent poses an additional challenge. For these different situations we use different methods all based on the use of the linear image osmosis model studied by Weickert et al. in [31].
In the following we first present the original parabolic linear osmosis equation studied in [31] and our slightly modified local elliptic formulation of osmosis [44]. Then we recall some of its common applications in image processing and finally apply our methods to digitally unveiling Adam and Eve in Claude De France's Primer in each of the different situations (i)-(iii) described above (cf. "IR transparent original pigments", "Over-paint with IR transparent texture" and "Non IR transparent overpaint texture: adding an inpainting step" sections).
The Osmosis model
The osmosis model has been introduced in [31] as a non-symmetric generalization of diffusion filters and as a new tool for image processing problems such as seamless cloning and shadow removal. The original parabolic equation for this model is Here u is the solution we are looking for and d is a given vector field defined on the image domain with values in R 2 that we call the drift-field. Typically d encodes information from the gradient of the desired solution u, thus it serves as a guide to the diffusion process. For a given positive image I, when d = d I := ∇I/I , it turns out that I is a trivial steady state (i.e. a solution for u t = 0 ) of Eq. (9). Under this choice, the vector field d I is called the canonical drift-field of I. Note that such drift-field is invariant to multiplicative changes of I.
Equation (9) is typically solved on the whole image domain under appropriate homogeneous Neumann boundary conditions. When applied to Cultural Heritage imaging this model has been successfully rendered computationally efficient by means of standard dimensional splitting techniques and applied, for instance, to Thermal-Quasi Reflectography (TQR) imaging and other similar applications in [45,46].
In the following, we look directly for the steady state of the previous equation, i.e. the elliptic equation, (9) u t = �u − div(du). and solve it on a small sub-domain D of the input image domain with mixed boundary conditions as in [44]. Restricting ourselves to a small domain has two main advantages: first, most of the image is supposed to be left untouched; secondly, the computational cost is much smaller. Moreover, having mixed boundary conditions allows for more flexibility in adapting (10) to the problem at hand. In particular, Dirichlet boundary conditions enforce the colour values on ∂D and a smooth transition of colour values across ∂D , which is appropriate if the image does not feature discontinuities (i.e. image edges) at the boundary of D. Neumann boundary conditions, on the other hand, prevent any diffusion across the boundary, ensuring clear colour discontinuities which is useful when the border of the mask is along an edge between two different colours appearing the same in the IR.
Common applications of the model
The osmosis equation has been proposed for several tasks [31], the most common being shadow removal and seamless cloning as an alternative to Poisson editing [47]. All these tasks share the idea of manipulating the canonical drift-field d I of one or more input images.
Shadow removal
The problem of shadow removal involves only one image and it is, as its name suggests, a process that takes as input an image with constant shadowed areas and gives as a result a shadow-free result. A constant shadow can be thought of as a multiplicative change in the domain of the shadowed region of the image. Since the canonical drift vector field is invariant to multiplicative change, the presence of the shadow is only encoded in the drift-field on the edge of the shadow. In an ideal case with a sharp shadow boundary, setting the drift field to zero there creates pure diffusion and results in a perfectly shadowless image [31].
Seamless cloning
Seamless cloning involves two input images that we will call the background image g and the foreground image f. This problem can be described as an improved copypaste process where some information of f is copied in a sub-domain D of g. That is, one directly replaces in D the colour information of g by the colour information of f. This leads to a rough result where the boundaries of the pasted region are quite noticeable. Seamless cloning consists in doing this copy-paste process in such a way that the boundaries of the pasted region are no longer noticeable and the transition from f to g is smooth and natural. To this end we create a drift-field d from the canonical drift-fields d g and d f associated to f and g, respectively, so that:
Applications to illuminated manuscripts
In an ideal case, the added pigments do not appear on the IR while the colours to be restored are perfectly encoded in the IR. In this case the problem is reduced to a simple seamless cloning application with Dirichlet boundary conditions. The drift-field of the colour image is replaced by the one from the infrared image on the sub-domain to be restored. However, unfortunately, such an ideal case is uncommon. For the illuminations of the primer, we encounter rather different scenarios. For instance, when the added cloth is IR transparent or has no texture in the IR, the osmosis equation is enough to get a satisfying result. When the texture of the added cloth appears in the IR, the osmosis equation is no longer enough and we have to add an inpainting step to our method. We describe this in a greater detail in the following.
IR transparent original pigments
In Fig. 8 To prevent this, we enforce Neumann boundary conditions along these edges to prevent any such diffusion. The results with and without the use of Neumann boundary conditions (represented as red lines in the mask) are presented in Fig. 8.
Over-paint with IR transparent texture
In Fig. 9, the added cloth on Adam is not IR transparent but it has little texture discernible on the IR and the original drawings appear clearly by transparency under it. This looks like a shadow in the IR as well as in the solution obtained with the method of the previous "IR transparent original pigments" section. Thus we mix seamless cloning with mixed boundary conditions and the shadow removal method. We replace the canonical drift-field of the colour image by the one of the IR in the region of interest. Then we put the drift-field to zero on the edge of the over-paint appearing in the IR. This method is illustrated in Fig. 9. The white lines of the mask are the areas where the drift-field is put to zero. In this figure we observe some transparent texture from the over-paint (over Adam's hip and at the bottom of Eve's veil). As expected, this texture appears in the final result.
Non IR transparent over-paint texture: adding an inpainting step
In the case of Fig. 10, the IR adds some useful information to the colour image, as shown by the result obtained using the method from the previous "Overpaint with IR transparent texture" section but a large amount of the added skirt texture, visible in the IR, is also present. To get rid of this unwanted texture, we put the drift field to zeros on the area corresponding to Adam's skin and manually segment the lines we want to keep. Note that this leads to a complete loss of texture in this region. To have a more natural looking result, we want to have some texture for the skin. While we can't recover the original texture with our inputs, the untouched part of the illumination gives us some example of texture for Adam's skin. This information is enough to use the exemplar-based inpainting algorithm described in "Exemplar-based inpainting" section, using as initialisation our result with missing texture. The final result on Adam's skin has probably not much in common with the original painting but it appears natural enough, so it can help to get a better idea of the illumination in its original state.
Preprocessing and parameters
As we just saw, such a complex restoration process necessitates significant user decisions. In fact the mask containing the sub-domain to be restored must be provided by the user as well as the edges along which Neumann boundary conditions should be applied and the sub-domain edges where the drift-field should be put to zero.
For our experiments we used the discretisation proposed in [31]. Then the linear system was solved using the MATLAB UMFPACK V5.4.0 LU solver. It took us at most 15 seconds to obtain the numerical solutions of the osmosis equation, our input images being respectively 901 × 1201 , 1001 × 1201 and 952 × 1248 for Figs. 8, 9 and 10. For Fig. 10 we only show a crop of our result of size 359 × 483 . For the inpainting step of Fig. 10, we used the implementation of the exemplar-based inpainting algorithm from [48] 6 with the NL-medians method, 9 × 9 patches, two scales and 4 iterations.
Discussion and future work
We proposed in this section a method to digitally remove over-paint from an illumination using infrared information. Although we do not claim that our result perfectly corresponds to the original state of the illumination, we believe that nonetheless it offers an idea of its original state. For our applications the results are mostly satisfying, especially when the added pigments do not appear on the IR or when the addition doesn't have too much texture visible in the IR. As the process necessitates some important user decisions, it is preferable to have input from an expert. From the IR alone we can only make educated guesses. Only outside information from an expert allows us to know which pigments have been over-painted, from examination under a microscope for example. This method is fast enough to allow fine tuning by the user as depending on the result the mask can be repeatedly improved. The quality of the output is highly dependent on the infrared wavelength and the pigments used for both the original painting and the over-paint.
Future work should address these difficulties and test the method on a larger dataset. An easy improvement would be to have an IR with the same resolution as the colour image to prevent the blur effect that we can observe. For the mask creation phase, a more automated segmentation detection could be inserted to have a first guess. In this work, we have only used the visible image and a single IR. Better results may be obtained by using several IR's where the wavelengths are chosen depending on the pigments used. In such a situation, the expert would only have to specify for each area which IR should be used.
Creating a 3D virtual scene from illuminated manuscripts
In recent years, certain museums and companies have taken a step beyond using digital technology to restore historic artwork, and have instead created 3D or animated versions of historic artwork that can only be experienced digitally. For example, the British Museum's Hutong Gallery recently created a 3D version of the 1623 painting "Reading in the Autumn Mountains" (originally painted during the Ming dynasty by the artist Xiang Shengmo). A video in which the viewer flies through the 3D painting can be found on their website [49]. Another example, which was shown at the Taipei Flora Expo in 2010/2011, features a Song Dynasty painting that was converted into an animation [50,51]. In this case, the animated painting was displayed on a specially designed screen, twenty feet wide and more than 360 feet long, mounted on the wall of the exhibition center. Finally, the Shanghai based company Motion Magic has created 3D versions of the paintings of Vincent Van Gogh, which viewers can walk around inside after putting on virtual reality goggles [52,53]. The result of these efforts is both a new kind of art and a new way of interacting with art. This trend is likely to get stronger as virtual reality becomes more mainstream and the demand for VR content increases.
In this section, we demonstrate the potential of these approaches by converting an illumination from the manuscript Annunciation by Simon Bening, Fitzwilliam Museum, MS 294b, Flanders, Bruges, (1522-1523), as well as the painting The Scream by Edvard Munch into stereo 3D (see Figs. 13 and 14). We do so using a 3D conversion pipeline originally developed for the conversion of Hollywood films. There, one is given the video shot Fig. 9 The texture of the over-paint is IR transparent ("Over-paint with IR transparent texture" section). Bottom left: we only applying the method of "IR transparent original pigments" section, the over-paint on Adam appears as some kind of shadow. Bottom right: after putting the drift-field to zero in the white areas of the mask, only some non IR transparent texture of the overpaint remains (on Adam's hip and the part of Eve's veil that covers the fence) from camera position p ∈ R 3 and orientation O ∈ SO(3) (corresponding to, for example, the left eye view), and the objective is to generate a plausible reconstruction of the video as it would appear from a perturbed position and orientation p + δp ∈ R 3 , O + δO ∈ SO(3) (corresponding to the view from the other eye). In some cases p and O, along with other relevant camera parameters such as field of view, may be given. In other cases, they must be estimated. In our case the process is the same, except that we have a manuscript (or painting) rather than a video. However, this introduces a subtle difference. In the case of converting a video shot with a real camera, although we might not know the associated camera parameters, we at least knew that they exist-but here, because the input is drawn by a human, existence is not given. In particular, depending on the artist, the drawing may or may not obey the laws of perspective. This is particularly noticeable in the case of The Scream-see Fig. 16.
Overview of a 3D conversion pipeline
Here we briefly go over the 3D conversion pipeline used in this paper. The steps of the pipeline are illustrated in Figs. 11 and 12. For more details, please see [33] or [28,Ch. 9.4].
1. Generate a rough but plausible 3D model of the scene, including a virtual camera with plausible parameters (parameters include position, orientation, field of view, possibly lens distortion, etc) placed within it. The 3D models do not have to be perfect, and are typically made a little larger than the objects they correspond to. This is because they will be "clipped" in step three. See Fig. 11a, where we show rough 3D models used for the Virgin Mary and Angel Gabriel. 2. Generate accurate masks for all objects in the scene. This is typically done by hand, but could also be done with the help of segmentation algorithms that are Fig. 10 The texture of the over-paint appears clearly on the IR ("Non IR transparent over-paint texture: adding an inpainting step" section). Bottom left: using the method of "Over-paint with IR transparent texture" section, the texture of Adam's skirt still appears clearly. Bottom center: we manually draw the underlying sketch and enforce pure diffusion on Adam's skin. It leads to a complete loss of texture. Bottom right: after the inpainting step, the result looks more natural then touched up. See Fig. 11b, where we show masks for the Virgin Mary and Angel Gabriel. 3. The camera is then transformed into a projector, which is used for two purposes. Firstly, the masks from the previous step are projected onto the rough 3D geometry from step 1, and used-much like a cookie cutter-to "clip" the geometry, throwing away the portion that is unneeded. See Fig. 11c, where we illustrate this for the 3D models of Mary and Gabriel. Secondly, the original image is then used as texture by projecting it onto the clipped geometry, as in Fig. 11d. 4. One or more new virtual cameras are added to the scene. If the original camera is taken to be either the right or left eye, then one additional virtual camera corresponding to the other eye is needed. However, sometimes the original camera position is taken to be half way between the two eyes, so that two virtual cameras (corresponding to the left and right eyes) are needed. These camera(s) will be used to render the 3D scene from one or more new viewpoints, in order to create a stereo pair. 5. Because the new camera(s) will typically see bits of background previously hidden behind foreground objects in the original view, inpainting of occluded areas is required. This is typically done using a toolbox of inpainting algorithms that are then touched up by hand. In our example, inpainting was done in Photoshop, using a combination of Content Aware fill and manual copy-pasting of patches by hand. See Fig. 12a, b, where we show the rendering of Annunciation from a new view, including in (a) the areas originally occluded by Mary and Gabriel, and in (b) the result after inpainting these areas. In reality, as this scene contains many more 3D objects than just Mary and Gabriel, what is shown in 12a is just a sampling of the inpainting problems that need to be solved.
Steps one, two, and the first half of step three can be thought of as generating a depth map for the image. The rough geometry generated in step one provides the smooth component of the depth map, while the masks generated in step two define the depth discontinuities, which are imposed on the geometry by the "clipping" in step three. Because the human eye is most sensitive to depth discontinuities, these have to be very accurate, but the 3D models do not. For example, in the conversion of Fig. 13a, the virgin Mary is modelled using just a few simple geometric primitives including an ellipsoid for her body, a sphere for her head, a cylindrical halo and a cone for the bottom of her dress. This is illustrated in Fig. 11a, where the geometry of the Angel Gabriel (also consisting of simple geometric primitives) is also shown.
Results and future work
The results of our 3D conversion of Annunciation are presented in Fig. 13, where we show the original manuscript (assumed to be the right eye view) side by side with the reconstructed left eye view. Similarly, Fig. 14 Fig. 15 for a closeup of this defect. To overcome this, one could modify the pipeline in the "Overview of a 3D conversion pipeline" section to first decompose semitransparent objects into two images (in this case, the pure halo and the background). This is something we would like to investigate in the future. The conversion of The Scream illustrates a nuance arising in the 3D conversion of paintings-namely that paintings may not obey the laws of perspective. In this case, due to the failure of perspective, it is not possible to extrapolate the railing of the bridge into the occluded area behind the screaming figure without introducing a bend or "kink". This is illustrated in Fig. 16 where we also show the "kink" we had to introduce into the 3D model of the bridge in order to make 3D conversion of this painting possible.
Conclusion
An adequate mathematical analysis and processing of images arising in the arts and humanities needs to meet special requirements: • There is often particular domain expertise which any analysis should ideally make use of. For instance, when digitally restoring an image, the integration of related images such as paintings from the same artist, could be taken into account. In what we have discussed this concept is used to the extent that a dictionary of characteristic structures in the undamaged part of the illuminations was created and used to fill in the lost contents in the damaged regions, compare Figs. 4, 6. This could be driven much further, expanding the dictionary by illuminations or details of illuminations from the same artist.
• The results achieved in Figs. 10,9,8 show a possible use-case for scientific imaging in art restoration or art interpretation. Indeed, we believe that the integration of different types of scientific imaging such as infrared imaging, are likely to give benefit to image analysis methods and so the latter should be able to capture those. • Explainability of results is crucial. There is clearly a balancing act to be made between hand-crafted analysis that captures expert knowledge and a black-a b c d Fig. 11 3D conversion Pipeline. Here we illustrate steps one to three of the 3D conversion pipeline presented in the "Overview of a 3D conversion pipeline" section. First, in a, rough 3D geometry is generated for all objects in the scene (here, only the Virgin Mary and Angel Gabriel are shown). Next, in b, accurate masks are generated for all objects (again, only Mary and Gabriel are shown). In c, the camera is turned into a projector and the masks from b onto the rough 3D geometry from a. This projection is then used to "clip" the 3D models by throwing away the portion of the geometry not falling within the projection. Finally, in d, the clipped geometry is "painted" by projecting the original image onto it a b Fig. 12 3D conversion Pipeline continued. Here we illustrate steps four and five of the 3D conversion pipeline presented in "Overview of a 3D conversion pipeline". In a, we have rendered the 3D scene from a new vantage point. This will be the left eye view of a stereo pair in which the right eye view is the original manuscript. Areas in red are occluded by Mary and Gabriel in the original manuscript and must be inpainted. In b, we see the result of inpainting, which in this case is done using a combination of Content-Aware Fill and manual copy pasting of image patches. a b Fig. 13 3D conversion of an illuminated manuscript. The illuminated manuscript considered here is Annunciation by Simon Bening, Fitzwilliam Museum, MS 294b, Flanders, Bruges, (1522-1523). The restored manuscript (a) is converted into a stereo 3D pair. To view the resulting stereo 3D image without glasses, first cross your eyes so that each image splits in two. Make the middle two images overlap, and then bring the superimposed image into focus (try varying your distance from the computer screen) box, data-driven image analysis approach. In particular, the latter should ideally have an interpretable mathematical representation that gives rise to new conclusions. In this paper we have solely considered model-based and hence explainable solutions to art restoration and interpretation problems. The growing emergence of deep learning solutions to various image analysis tasks provides an alternative approach to these problems, at the moment however without a proper explanation. • Relevant characteristics are often hidden in very fine details of the artwork, like a brushstroke in a painting. Capturing these fine details in a digital format results in high-resolution images that an image analysis method should be capable of processing. This means there is a demand for computationally-efficient image analysis methods. • Digital processing and manipulation of artwork opens up a myriad of possibilities of analysing and processing, but also of experiencing, understanding and reinterpreting artwork. As an example we have shown 3D conversion and its possible use-cases in the presentation of art, cf. Fig. 13 for instance.
With the above in mind, we have discussed a selected subset of mathematical approaches and their possible a b In the process of coverting The Scream into 3D we discovered, as in a, that the railing of the bridge in the painting does not obey the laws of perspective. To get around this issue, we had to introduce a "kink" into our 3D model of the bridge, as in b | 11,954.6 | 2018-03-19T00:00:00.000 | [
"Mathematics",
"Art",
"Computer Science"
] |
Single-cell analysis reveals differential regulation of the alveolar macrophage actin cytoskeleton by surfactant proteins A1 and A2: implications of sex and aging
Background Surfactant protein A (SP-A) contributes to lung immunity by regulating inflammation and responses to microorganisms invading the lung. The huge genetic variability of SP-A in humans implies that this protein is highly important in tightly regulating the lung immune response. Proteomic studies have demonstrated that there are differential responses of the macrophages to SP-A1 and SP-A2 and that there are sex differences implicated in these responses. Methods Purified SP-A variants were used for administration to alveolar macrophages from SP-A knockout (KO) mice for in vitro studies, and alveolar macrophages from humanized SP-A transgenic mice were isolated for ex vivo studies. The actin cytoskeleton was examined by fluorescence and confocal microscopy, and the macrophages were categorized according to the distribution of polymerized actin. Results In accordance with previous data, we report that there are sex differences in the response of alveolar macrophages to SP-A1 and SP-A2. The cell size and F-actin content of the alveolar macrophages are sex- and age-dependent. Importantly, there are different subpopulations of cells with differential distribution of polymerized actin. In vitro, SP-A2 destabilizes actin in female, but not male, mice, and the same tendency is observed by SP-A1 in cells from male mice. Similarly, there are differences in the distribution of AM subpopulations isolated from SP-A transgenic mice depending on sex and age. Conclusions There are marked sex- and age-related differences in the alveolar macrophage phenotype as illustrated by F-actin staining between SP-A1 and SP-A2. Importantly, the phenotypic switch caused by the different SP-A variants is subtle, and pertains to the frequency of the observed subpopulations, demonstrating the need for single-cell analysis approaches. The differential responses of alveolar macrophages to SP-A1 and SP-A2 highlight the importance of genotype in immune regulation and the susceptibility to lung disease and the need for development of individualized treatment options.
Background
Surfactant protein A (SP-A) is one of the many molecules that contribute to lung immunity. It binds foreign particles and organisms that invade the lungs and targets them for clearance via phagocytosis by the alveolar macrophages. It also enhances clearance of inflammatory cells after the inflammation has been resolved and finally gets removed by the alveolar macrophages themselves. The SP-A/alveolar macrophage interaction enhances the alveolar macrophage functions such as chemotaxis, chemokine and reactive oxidant production, phagocytosis, and endolysosomal trafficking.
SP-A knockout mice have extensively been used as a model to elucidate the role of SP-A in lung innate immunity. It has been shown that SP-A regulates inflammation following ozone exposure [1] and other lung challenges [2], while its absence leads to increased susceptibility to infection by many organisms, such as Klebsiella pneumoniae [3,4], Pseudomonas aeruginosa [5], group B Streptococcus [6], and viruses [7]. Humans, however, have two distinct SP-A genes, namely, SP-A1 and SP-A2, and a number of variants for each one, indicating that the roles of SP-A in immunity are complex and finely tuned. The different SP-A variant molecules display distinct functions, including, but not limited to, cytokine production [8,9], phosphatidylcholine secretion [10], and phagocytic activity [11][12][13].
Previous studies performed in our laboratory showed that the proteome of alveolar macrophages from SP-A knockout (KO) mice treated with a single intrapharyngeal dose of SP-A resembles that of the wild-type mice [14] and there are sex differences in the response of the alveolar macrophage proteome to SP-A [15]. The regulation of the alveolar macrophage phenotype by SP-A becomes more complex when one takes into account the fact that the proteomes of alveolar macrophages derived from humanized transgenic mice that express either SP-A1 or SP-A2 are significantly different [16] and that there are sex differences in the responses of the macrophage cellular proteomes to different SP-A variants [17].
The alveolar macrophages perform most of the searchand-destroy functions (chemotaxis and phagocytosis) in the lung. The actin cytoskeleton is a crucial mediator of these processes. Interestingly, the proteomic analyses mentioned above identified proteins that are related to the actin cytoskeleton as being differentially regulated by SP-A1 and SP-A2. In the present study, we performed a single-cell imaging analysis to determine the in vitro and ex vivo effects of SP-A1 and SP-A2 on the distribution of F-actin in the alveolar macrophages. Our findings demonstrate diverse roles of SP-A1 and SP-A2 in the regulation of the alveolar macrophage cytoskeleton and, hence, the cell's motility and activation status.
Animals
All mice used in the present study were on the C57BL6/J background and were either 8 weeks (young) or 8-10 months (old) in age. SP-A KO and humanized transgenic SP-A1, SP-A2, and SP-A1/SP-A2 mice were generated on the C57BL6/J background [18]. The animals were raised in the breeding facility of the Penn State College of Medicine. All the mice were maintained in a pathogenfree environment or in barrier facilities with free access to food and water. The study was approved by the Institutional Animal Care and Use Committee of the Penn State College of Medicine. For each experiment, equal numbers of age-matched males and females (n = 3) were used.
Collection of bronchoalveolar lavage fluid
Mice were euthanized using a mixture of ketamine and xylazine, and bronchoalveolar lavage (BAL) fluid was collected [15] by instilling 1 mM EDTA/PBS into the lungs through a tracheal cannula using 0.5 mL of solution five times, for a total of 2.5 mL. For each instillation, the solution was applied and withdrawn three times with concurrent chest massage. The BAL fluid was centrifuged at 150g for 5 min at 4°C, and the cell pellet was washed once with 1 mL of 1 mM EDTA/PBS. Total cells were counted with the use of a hemocytometer, and cytocentrifuge slides were prepared for differential cell counting with the Fisher Healthcare Protocol Hema 3 stain (Fisher Scientific) according to the manufacturer's instructions.
Culture of mAMs
Following collection of the BAL fluid, AMs were washed with serum-free RPMI-1640 containing 2 mM glutamine and 1× antibiotic-antimycotic solution. The AMs were then resuspended in the same medium and plated on UV-sterilized coverslips (No. 1, 18-mm diameter) in 12well cell culture plates. After allowing the cells to attach for 90 min, the medium was changed and the cells were incubated overnight at 37°C in the presence of 5 % CO 2 . In a set of experiments, the AMs were treated with 10 μg of SP-A1, 10 μg of SP-A2, 5 μg SP-A1 + 5 μg SP-A2, or 10 μg SP-A1 + 10 μg SP-A2 for 60 min before staining.
Preparation of purified SP-A
Purified SP-A was prepared from CHO cells as described previously [8]. Briefly, stably transfected CHO-derived cell lines expressing either SP-A1 (6A 2 ) or SP-A2 (1A 0 ) were cultured for 5 days in the expression medium as described [8], and the conditioned media were collected. SP-A was purified using mannose affinity chromatography, concentrated, and stored at −80°C until use. The concentration of lipopolysaccharides (LPS) in the SP-A preparations was measured using the Limulus Amebocyte Lysate QCL-1000 assay (Lonza, Walkersville, MA). LPS was below the detection limit of the assay in all the preparations used in the present study. Purity of SP-A was determined by silver staining (Bio-Rad Silver Stain Plus Kit, Bio-Rad).
Staining for F-actin, G-actin, and cell membranes Following the overnight culture, the AMs were washed once with PBS, fixed with 3.7 % paraformaldehyde for 10 min at room temperature, permeabilized with 0.5 % Triton X-100, and washed three times before incubation for 30 min in staining solution, containing one unit of Alexa Fluor 488-conjugated phalloidin (Molecular Probes, Eugene, OR). In some experiments, the staining solution also contained 0.3 μM of Alexa Fluor 594conjugated (deoxyribonuclease 1) DNase I and 5 μg/mL of Alexa Fluor 647-conjugated wheat germ agglutinin (Molecular Probes). Following three more washes, the coverslips were mounted on cover glasses with ProLong Gold Mounting Medium with DAPI (Life Technologies, Eugene, OR). Depending on the distribution of F-actin, the cells were blindly (without knowledge of the sex or the genotype of the animal) categorized as belonging to one of the four subpopulations: A, minimal F-actin staining; B, perinuclear F-actin staining; C, diffuse cytoplasmic F-actin; or D, existence of cytoplasmic protrusions (filopodia or podosomes).
Image acquisition and data analysis
For light microscopy experiments of F-actin staining, the mAMs were imaged using a Nikon TE-2000 PFS fluorescence microscope, using a ×60/1.40 phase contrast, oil immersion objective lens. The images were captured using a Photometrics Coolsnap HQ2 digital camera (0.11 μm/pixel) and saved as TIFF files. The acquisition time was 100 ms for all images acquired. Nikon NIS-Elements v.3.0 software was used for image acquisition, and Adobe Photoshop CD4 was used for image analysis. The AMs were analyzed by manually drawing an area around the border of each cell, and a cell-free area of equal size was used for background subtraction. The exported data included the average fluorescence per pixel, the number of pixels in each selected area (cell), and the sum of fluorescence intensity for each cell.
For confocal microscopy experiments (multicolor imaging), a Leica AOBS SP8 laser scanning confocal imaging microscope (Leica, Heidelberg, Germany) at the Penn State College of Medicine Imaging Core was used. Images were acquired using a high-resolution Leica ×60/1.3 Plan-Apochromat oil immersion objective lens. The laser lines were produced by a UV diode (for DAPI) and an 80-MHz white light laser (Leica SP8 AOBS module, for Alexa Fluor conjugates). The emission signals were collected sequentially using acousto-optical beamsplitter (AOBS) tunable filters using a pinhole Airy size of 1.0. The bandwidths of the highly sensitive HyD detectors were set in a way that prevented fluorescence bleed-through. The images were obtained with the use of the Leica Application Suite (LAS AF), and image analysis was performed using the Imaris v.7.3 software (Bitplane). The fluorescent signals were rendered with the Surface tool in the Surpass View of the software, and the statistics from each channel were exported to Microsoft Excel spreadsheets.
Statistical analysis
All statistical analyses were performed with GraphPad Prism v.6.0 and SAS v.9.4. Data are displayed as the mean ± SEM. Comparisons of means were analyzed with one-way ANOVA or two-tailed unpaired t test with Welch's correction for non-equal variances. In certain experiments, two-way ANOVA was used for sex (male vs. female) and genotype (KO vs. SP-A1 vs. SP-A2), followed by planned comparisons using Fisher's least significant difference. Comparisons of frequencies were performed with chi-square contingency analysis tests. In order to account for interindividual differences among the animals used in the study, a hierarchical analysis (animal-culture well-cell) was designed in the SAS software consisting of generalized linear mixedeffects models with Poisson regression. The models contain random effects to account for (1) the correlation due to measurements from the same animal, and for (2) similar environmental conditions within a well, followed by comparisons of genotype (or treatment, in the case of in vitro experiments) and phenotype, along with Bonferroni corrections for multiple comparisons. A P value ≤0.05 was considered statistically significant.
Cell area and F-actin content in the alveolar macrophages of humanized transgenic SP-A mice
Following previous studies that highlighted the importance of SP-A in the macrophage cytoskeleton [14,17], we measured the cell size and F-actin content of alveolar macrophages isolated from young and old transgenic mice carrying either the SP-A1 (6A 2 ) or SP-A2 (1A 0 ) gene. We examined both the cell area and the mean per pixel fluorescence intensity of phalloidin staining as a measurement of the F-actin content per cell [14]. The macrophages of old mice (8-10 months old) that carry either the SFTPA1 (SP-A1) or the SFTPA2 (SP-A2) cDNA were significantly larger than the macrophages from the KO mice (Fig. 1a). In fact, the macrophages from the SP-A1 mice had a significantly larger area than the ones from the SP-A2 mice. Upon examination of the mean F-actin fluorescence per pixel, the macrophages from the SP-A1 mice showed significantly higher fluorescence intensity compared to those from both KO and SP-A2 (Fig. 1b). Taken together, these data indicate that there are higher levels of polymerized actin in the alveolar macrophages from SP-A1 mice than in the ones from SP-A2 or KO mice.
In order to determine whether there are sex differences among the macrophages from different genetic backgrounds, we analyzed the same data taking into account the sex of the animals. Two-way ANOVA indicated that there is no sex-by-genotype interaction affecting the area of the macrophages (F(2,12) = 0.26, P = 0.7733). Sex was not found to be a factor that influences the area of AMs in old mice (F(1,12) = 0.1035, P = 0.4738). In accordance with previously demonstrated differences in AM size in response to SP-A1 or SP-A2 proteins in vitro [14], the main effect for genotype was significant (F(2,12) = 1.100, P = 0.0057). We used post hoc tests to compare the effects of SP-A1 and SP-A2 on the area of the AMs. In both males and females, the AM area of SP-A1 mice is significantly larger compared to that of KO. In SP-A1 males, the area is significantly larger than their SP-A2 counterparts. As far as the females are concerned, the only significant difference observed is between the SP-A1 and the KO mice (Fig. 1c). Interestingly, the significant difference in cell size observed between the SP-A2 and KO genotypes ( Fig. 1a) is not reflected when the analysis includes the sex of the donor animals, because male SP-A2 AMs seemingly tend to be smaller than male KO cells, but AMs from female SP-A2 tend to occupy larger area. As far as the fluorescence intensity of F-actin is concerned, the interaction effect is significant (F(2,12) = 2.124, P < 0.00001). Fluorescence intensity is significantly higher in the SP-A1 male mice compared to KO males and SP-A2 males.
In order to determine whether the SP-A-induced changes in the alveolar macrophage cytoskeleton are innate to the mice or show as a cumulative result of the prolonged exposure to SP-A, we performed the same study using young (8 weeks old) animals, either KO or expressing SP-A1, SP-A2, or both (co-expressors). When the analysis is performed without factoring in the sex of the animals, there is no significant difference in the area of the alveolar macrophages among the different groups (Fig. 1e), but the F-actin fluorescence of the co-expressors was significantly reduced in comparison to the KO, SP-A1, and SP-A2 macrophages (Fig. 1f ). However, there are significant differences in both area and actin fluorescence that are masked by the exclusion of sex as a factor, as can be seen in Fig. 1g, h. Two-way ANOVA with sex and genotype as the factors demonstrates that the sex-by-genotype interaction is significant for the cell area of the AMs (F(3,16) = 0.6264, P = 0.0285). Post hoc analysis reveals that the area of the macrophages of male mice expressing both SP-A1 and SP-A2 is significantly larger than the area of KO male, SP-A2 male, and SP-A1 male mice. Among female mice, the area of the macrophages from SP-A1 mice is significantly smaller than the area from the cells from all the other female mice. When comparing mice of opposite sex, while the KO and the SP-A2 macrophages do not have sex differences, the SP-A1 and the co-expressing female macrophages have significantly smaller area than the males of the same genetic background (Fig. 1g). There is no significant effect of the sex-by-genotype interaction on the F-actin fluorescence intensity (F(3,16) = 0.1969, P = 0.4029), but a significant e-h 2-month-old mice. All data shown represent measurements made on cells obtained from three mice (n = 3). Panels c, d and g, h represent the data from panels a, b and e, f, respectively, taking into account the sex of the animals. Number of cells per bar ranges from n = 22 to n = 79. Comparisons were made by two-way ANOVA followed by Fisher's LSD test. *P ≤ 0.05; **P ≤ 0.01; ***P ≤ 0.001; ****P ≤ 0.0001; #significant difference from all other genotypes of the same sex effect was observed for each factor (for sex, P = 0.0001; for genotype, P < 0.0001). Post hoc tests show that the F-actin fluorescence intensity is strikingly reduced in both male and female co-expressing macrophages. With the exception of the co-expressing mice, there are no differences among the different genotypes within the same sex, but there is a significant increase in the staining intensity of the cytoskeleton of cells from female mice compared to male mice of the same background (Fig. 1h). The same tendency exists in both KO and co-expressing mice, but it does not reach significant levels.
Overall, these data confirm that the actin-related cytoskeleton of alveolar macrophages is affected by different SP-A variants in a complicated way and the effects of SP-A seem to accumulate over time. Previous work has shown that in vivo administration of SP-A1 to mice influenced the actin-related proteins of AMs from male and female mice differently [17], which is in accordance with the results presented here (Fig. 1d, g, and h). These results may have functional consequences, since the ratio of SP-A1 to total SP-A in BAL has been shown to change depending on age and on whether the patients suffer from pathologic conditions, such as cystic fibrosis and alveolar proteinosis [19], and asthma [20].
Subpopulations of alveolar macrophages based on the distribution of F-actin
It has long been known that the alveolar macrophages are a diverse set of cells, with many subpopulations of distinct phenotypes and responses to disease or to environmental challenges [21][22][23][24]. Our initial imaging study revealed that the distribution of F-actin was not identical in all the alveolar macrophages from the bronchoalveolar fluid of mice. In order to confirm this observation, we performed confocal imaging of the phalloidin-stained alveolar macrophages. We identified four distinct phenotypes based on phalloidin staining. We named them phenotypes A, B, C, and D, in an order of increasingly activated status: (a) largely depolymerized actin, with actin "puncta" discernible throughout the cytoplasm (Fig. 2a); (b) actin tightly packed in the perinuclear region (Fig. 2b); (c) actin is diffuse in the cytoplasm (Fig. 2c); and (d) actin is taking part in the formation of cytoplasmic protrusions, i.e., filopodia and podosomes (Fig. 2d). Of note, there were cells that were negative for phalloidin staining, despite their seemingly intact nuclei. These cells were omitted from the study. All cells were blindly categorized as belonging to one of the phenotypes, cell area and phalloidin staining fluorescence were measured, and the respective measurements for each cell were backtracked to the animal of origin after the completion of each experiment. In order to verify that there are differences among the phenotypes, we compared the intensity of phalloidin staining from all cells. Phenotypes A and D are significantly different from all other phenotypes, but there are no differences in the F-actin content between phenotypes B and C (Fig. 2e). As far as the area of the cells is concerned, phenotype D is significantly different than phenotypes A and B (Fig. 2f ).
Differences in the monomeric actin pools among the alveolar macrophage subpopulations
Since the phenotypic differences among the different alveolar macrophage subpopulations based on F-actin content are so prominent, we examined whether the G-actin content shows similar differences among the phenotypes. This would give some insight as to whether the observed distinct phenotypes are a result of cytoskeleton remodeling, or a more permanent condition that affects the pools of monomeric actin as well. In a subset of the experiments, we co-stained the cells with Alexa Fluor 488conjugated phalloidin and Alexa Fluor 594-conjugated DNase I, which has been shown to specifically bind to G-actin (monomeric) within the cell [25] (Fig. 3a). The G-actin content shows an increasing trend from phenotype "A" to phenotype "D," with significant differences between the pairs A-B, A-C, and A-D (Fig. 3b). This result indicates that the observed phenotypes are the result of changes in gene expression that include the actin cytoskeleton, and not the result of acute events that would lead to cytoskeleton remodeling. If that were the case, the total cellular actin content (F-actin + Gactin) would not differ among the different phenotypes. Notably, there are no differences among the phenotypes in the F-/G-actin ratio (Fig. 3c), which means that the degree of F-actin polymerization does not differ among the phenotypes.
In vitro administration of SP-A proteins alters the alveolar macrophage subpopulations
In order to determine whether SP-A has any effect on the frequency of the alveolar macrophage subpopulations, we examined whether in vitro short-term administration of SP-A would have any acute effects on the frequency of the phenotypes. After isolating alveolar macrophages from SP-A KO mice as described above, we added SP-A1, SP-A2 (10 μg each), or both (low dose 5 μg each or high dose 10 μg each) to the cultured macrophages for 1 h before fixing and staining the cells. While there were no baseline differences among the controls, there were significant sex differences, as a response of alveolar macrophages to SP-A2, and a combination of both SP-A1 and SP-A2 in the higher dose used, confirming that the response to surfactant proteins is, at least partially, sex-dependent and also dose-dependent (as the combination of the proteins in the lower dose did not yield significant difference) (Fig. 4a, b). In males, the distribution among the phenotypes of the cells exposed to a high dose of both SP-A1 and SP-A2 was significantly different from that of KO mice (Fig. 4c). In females (Fig. 4d), SP-A2 leads to an increase of the "A" phenotype subpopulation and a concurrent decrease of the "D" subpopulation, but this effect does not reach significance due to the small number of cells counted. SP-A1 has significant differences with both KO and the high combination dose, signifying that the two proteins may have opposing roles in the regulation of AMs. Furthermore, administration of a high dose of both proteins in females leads to a moderate, yet still significant, increase of the "A" phenotype compared to KO, which verifies the observation that SP-A1 and SP-A2 have opposing effects.
Effects of SP-A1 and SP-A2 on the distribution of alveolar macrophage phenotypes
In order to determine whether SP-A1 and SP-A2 have similar effects on the frequency of the alveolar macrophage subpopulations within the organism, we back traced the cells that fall under each phenotypic category to the genotype of the mice, meaning male or female mice that express either SP-A1 or SP-A2 (both old and young) or both (young only) as well as SP-A KO mice as control. In order to compare the differences among the phenotypic subpopulation frequencies within the different genotypes, we performed a hierarchical analysis that accounted for effects that may stem from cells originating from the same animal and/or cells that were cultured within the same well. The distribution of the phenotypes for the old mice can be seen in Fig. 5a. There is a significant difference between the male and female SP-A KO mice, with more cells from the male mice seemingly being in a more activated state (compare sums of the "C" and "D" phenotypes between KOM and KOF in Fig. 5a). SP-A2 induces inactivation of macrophages in male mice with an increase of the "A" phenotype. However, SP-A2 has the opposite effect on female mice, as the proportion of cells of the "C" phenotype is increased, at the expense of "A" and "B" cells. This opposite effect of SP-A2 on male and female mice leads to sex differences between the SP-A2 male and female mice. SP-A1 has a similar, albeit more moderate, effect on male mice as SP-A2, i.e., it increases the percentage of cells that Fig. 2 Phenotypes of the subpopulations of alveolar macrophages based on F-actin distribution. a Punctate stain indicates scattered cytoplasmic F-actin. b F-actin is found only in the juxtanuclear region. c F-actin is diffuse in the cytoplasm. d F-actin is cortical with cytoplasmic protrusions (podosomes). The dashed lines outline the periphery of the cells. Scale bars 10 μm. e Quantification of the F-actin content in the cells. Total number of cells, n = 1627. f Cell area of the cells for each phenotype. A subset of the cells (n = 305) from e was used for area measurements. Comparisons were made by two-tailed t test with Welch's correction (equal variances not assumed). *P < 0.05; **P ≤ 0.01; ***P ≤ 0.001; ****P ≤ 0.0001 are seemingly less active. Similarly to male mice, the effect of SP-A1 on the macrophages from female mice is more moderate than that of SP-A2. Cells of the "C" phenotype are significantly increased in comparison to KOF (P = 0.02799), and that effect generates sex differences in the SP-A1 mice as well. Of note, there are no differences between the two variants in either male or female mice.
As far as the young mice are concerned (Fig. 5b), there is a baseline difference between the KO male and female mice, and this trend is opposite to the one in old mice. In the old KO mice, alveolar macrophages with high F-actin content (phenotypes "C" and "D") are the prevalent phenotypes among males and less than 50 % among females. In the young KO male mice, phenotypes "C" and "D" combined account for~42 % of the total number of macrophages. The same combination ("C" and "D") for female mice is 65 %. SP-A2 (but not SP-A1) has an effect on male mice, while in female mice, each variant changes the distribution of AMs significantly. In addition, a significant difference is observed between SP-A1 and SP-A2 in female mice. As is the case with the old mice, there are no differences between the variants in male mice. When both SP-A1 and SP-A2 are expressed, the mice are similar to their KO counterparts, regardless of sex. Interestingly, there are still sex differences in the presence of either one or both variants. In the case of SP-A2, and the combination of both SP-A1 and SP-A2, the sex differences can be attributed to both the "A" and "D" phenotypes, whereas for SP-A1, only cells of the "A" phenotype are significantly different. No differences are observed among male mice that carry at least one SP-A variant, although the SP-A1 vs. SP-A2 P value is close to the level of significance, 0.05475. In females, the difference between the two variants shows statistical significance. At the same time, the distribution of cells from SP-A2 mice is similar to the one observed in SP-A1/SP-A2-expressing mice, meaning that the effect of SP-A2 is the major factor driving the observed phenotype.
Discussion
In a series of studies, we have examined the effects of SP-A on the alveolar macrophage phenotype, as expressed by its cellular proteome, both in vitro and in vivo [14][15][16][17]. These studies revealed that the protein expression pattern of the alveolar macrophages is highly dependent on the microenvironment of the cells and the variant of SP-A involved is a major factor affecting the proteome. It was also demonstrated that there are significant sex differences in the response of alveolar macrophages after in vivo treatment of SP-A KO mice with SP-A from human bronchoalveolar lavage [15] or SP-A variants expressed in cell culture [17]. Images were acquired sequentially with a Leica SP8 AOBS laser scanning confocal system with software-adjusted detection spectra to avoid bleed-through of the signals. The pseudocolors were assigned by ImageJ. Scale bars 10μΜ. b Means of G-actin signal intensity per phenotype. Total number of cells, n = 286. c Means of F-/G-actin ratio per phenotype. All cells from b were used for the measurements. Comparisons were made by twotailed t test with Welch's correction (equal variances not assumed). *P ≤ 0.05; **P ≤ 0.01; ****P ≤ 0.0001 The studies mentioned above showed that the expression of proteins related with the actin cytoskeleton is affected by SP-A. Such proteins include, for instance, the F-actin capping protein, capping protein of the actin filament, the light chain of myosin, and the Rho GDP dissociation inhibitor, among others. Actin-related proteins are clearly of the utmost importance for the alveolar macrophages because many macrophage functions, such as motility, chemotaxis, and phagocytosis, are based on an intact cytoskeletal network with the potential for rapid remodeling.
The present study builds and expands on previous findings regarding the effects of SP-A on the actin cytoskeleton. Using an imaging approach, we determined the effect that SP-A1 and SP-A2 have on the alveolar macrophage phenotype, by studying the distribution of F-actin in the cells. In vitro assays with SP-A variants expressed by CHO cells and use of alveolar macrophages from humanized transgenic mice expressing SP-A1, SP-A2, or both revealed that (i) SP-A1 and SP-A2 differentially affect the alveolar macrophage subpopulations, (ii) the response to SP-A variants differs between males and females, and (iii) the response differs between young and old mice.
Initially, as a proof of concept, we used epifluorescence microscopy to determine whether there are any differences among the alveolar macrophages from mice of different genotypes and different ages, both male and female. It became evident that the phenotype and activation status of the alveolar macrophages, as demonstrated by the cell size and the F-actin mean per pixel fluorescence, is associated with the genotype of the donor animals. The results showed complicated response patterns, especially when factoring in the sex of the animals (Fig. 1). An important observation of this particular experiment was the diversity of phenotypes in the bronchoalveolar lavage cells under baseline conditions, even within the same field of view during imaging. Phalloidin staining of macrophages has been performed before, and the increase in cell size and/or the appearance of filopodia is indicative of macrophage activation, e.g., after LPS challenge [26,27], while the polymerized actin in the unstimulated macrophages is perinuclear or cortical [28]. It has been reported that M1-activated (pro-inflammatory) macrophages have a dense static actin network (similar to our observed phenotype "B") whereas the actin of M2-activated (anti-inflammatory) macrophages is more diffuse and randomly distributed (similar to our observed phenotype "C") [29]. In a study that examined the effects of SP-A on the actin distribution of alveolar macrophages, it was reported that SP-A causes directional expansion of filopodia [30]. Our confocal imaging experiments revealed distinct phenotypes that ranged from actin puncta (phenotype A), dense actin network (phenotype B), diffuse actin network (phenotype C), or protruding filopodia (phenotype D). We consider the distinct phenotypes as representing different stages of activation.
In order to understand whether the SP-A-induced phenotypic changes of the alveolar macrophages are related to rapid cytoskeletal remodeling or more permanent changes, we co-stained cells with fluorescent DNase I, which has been shown to bind to G-actin (monomeric) in the cytoplasm [25]. Comparison of means of G-actin fluorescence units among the phenotypes revealed that The statistical comparisons were performed in SAS with a hierarchical model accounting for cells from the same animal and cells that were cultured within the same well and can be seen in the stacked bar chart at the bottom panel. b Donut charts of the distribution of the alveolar macrophage phenotypes in young mice of the indicated genotype. The number of cells counted per genotype can be seen in the donut hole. The statistical comparisons were performed pairwise as in the old mice and can be seen in the stacked bar chart at the bottom panel. *P ≤ 0.05; **P ≤ 0.01; ***P ≤ 0.001; ****P ≤ 0.0001 the actin cytoplasmic pools follow a trend similar to the one of F-actin (F-/G-actin ratio is not significantly different among the observed phenotypes), indicating that the different phenotypes do not come as a result of rearrangement of the cytoskeleton but probably due to differences in the gene expression of actin itself as well as actin-related proteins. Thus, phenotype "A" is likely to represent cells that are detaching, presumably due to apoptosis, since the actin metabolism in these cells appears to be highly dynamic. Phenotype "B" is probably quiescent, whereas phenotypes "C" and "D" could represent early and late activation status, respectively.
In vitro administration of SP-A1 and SP-A2 proteins from the two SP-A genes to macrophages from SP-A KO mice altered the frequency of each phenotype. SP-A2 caused depolymerization of F-actin in cells from females, as demonstrated by the increase of phenotype A cells. When both SP-A1 and SP-A2 were used to treat AMs from KO mice, moderate effects were observed in both sexes compared to SP-A2 alone. This can be explained by the potential counterbalancing actions of SP-A1 and SP-A2. The higher proportion of activated alveolar macrophages from male mice exposed to SP-A2 (Fig. 4) is in accordance with a functional assay previously reported by our lab [13]. In that study, alveolar macrophages from male rats were challenged with P. aeruginosa in the presence or absence of SP-A1 or SP-A2, and it was found that the phagocytic index of cells exposed to SP-A2 was higher than that of cells exposed to SP-A1.
The opposing actions of SP-A1 and SP-A2 are supported by the ex vivo results from the present study (Fig. 5). Cells isolated from young humanized transgenic male mice carrying SP-A1 did not show a significantly different phenotype distribution compared to KO, but cells from SP-A2 males showed a higher proportion of cells of the "C" and "D" phenotypes compared to KO, meaning that the macrophages of these mice are readily active. These results are consistent with our previous work [17] which showed that the proteome of macrophages from male mice is not as responsive to SP-A administration in vivo as females, as far as the actin-related group of proteins is concerned. Indeed, the present study verifies that in female mice, there is significant response to both SP-A1 and SP-A2 and this response is different between the two variants. However, co-expression of both variants counterbalances the effect and brings the cells to a distribution similar to the one observed in KO mice. Even though the distribution pattern is similar between the KO and the co-expressing cells, we speculate that these cells may be functionally distinct, with the cells exposed to SP-A1 and SP-A2 being primed for activation, whereas the KO cells may not. Further studies are needed to elucidate this.
Unlike the young male mice, cells isolated from older male transgenic mice were different from the KO. The punctate pattern of F-actin was more prominent in both SP-A1 and SP-A2 mice, with SP-A2 showing a higher proportion of cells with low actin levels. In older female mice, there are also gene-specific differences, with SP-A2 demonstrating more cells of the "C" phenotype and SP-A1 having higher proportion of the "D" phenotype. Aging has been reported to affect the immune system in general and macrophage functions in particular. Impairments of the immune system that are related to age (termed immunosenescence) seem to contribute to increased susceptibility to infectious diseases, as well as cancer and autoimmunity [31]. In splenic macrophages, TLR4 signaling has been shown to be compromised during aging which leads to a perturbed pattern of cytokine expression [32,33], although the exact molecular mechanisms are not well understood. Importantly, SP-A has been shown to directly interact with TLR4 [34] and its co-receptor CD14 [35] and modulates the TLR4 activity. Reduced expression of CD14 has been proposed as the reason for the impaired TLR4-related signaling during aging [32]. Macrophage polarization towards the M1 and M2 phenotypes has also been reported to be affected by age. Although age does not result in a skew towards either phenotype [36], old mice seem to have higher numbers of M2 macrophages in the spleen, lymph nodes, and bone marrow [37]. Interestingly, this observation could be in accordance with our study, if we consider alveolar macrophages of the "C" phenotype in the present study to be similar to M2 polarized, as described elsewhere [29,38].
There are sex differences in the distribution of subpopulations of AMs among mice of the same genotype in both the in vitro (Fig. 4) and the ex vivo (Fig. 5) experiments. These results come as a continuation of a long series of studies that have demonstrated such differences in both the molecular and the physiological level [3,4,15,17,39,40]. Hormonal regulation of the actin cytoskeleton in AMs could be an important factor in the generation of sex differences. Sex hormones have been shown to affect cytoskeletal proteins in other systems [41][42][43] and also the production of surfactant in the developing lung [44][45][46]. Whether sex hormones actually affect the cytoskeleton of AMs through SP-A or other mechanisms remains to be investigated.
Functionally, the regulation of the distribution of subpopulations of AMs by SP-A variants may explain differences observed in their phagocytic activity [11][12][13] and the course of lung disease [4,39,40,47]. Sex differences concerning susceptibility in lung disease, such as asthma [48], chronic obstructive pulmonary disorder [49], and even lung cancer [50], have been widely reported, although the latter remains controversial [51]. | 8,559.6 | 2016-03-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
orthoDr: semiparametric dimension reduction via orthogonality constrained optimization
orthoDr is a package in R that solves dimension reduction problems using a orthogonality constrained optimization approach. The package serves as a unified framework for many regression and survival analysis dimension reduction models that utilize semiparametric estimating equations. The main computational machinery of orthoDr is a first-order algorithm developed by Wen&Yin (2013) for optimization within the Stiefel manifold. We implement the algorithm through Rcpp and OpenMP for fast computation. In addition, we developed a general-purpose solver for such constrained problems with user-specified objective functions, which works as a drop-in version of optim(). The package also serves as a platform for future methodology developments along this line of work.
Introduction
Dimension reduction is a long-standing problem in statistics and data science. While the traditional principal component analysis (Jolliffe, 1986) and related works provide a way of reducing the dimension of the covariates, the term "sufficient dimension reduction" is more commonly referring to a series of regression works originated from the seminal paper on sliced inverse regression (Li, 1991). In such problems, we observe an outcome Y ∈ R, along with a set of covariates X = (X 1 , . . . , X p ) T ∈ R p . Dimension reduction models are interested in modeling the conditional distribution of Y given X, while their relationship satisfies, for some p × d matrix B = (β 1 , . . . , β p ), where represents any error terms and h, with a slight abuse of notation, represents the link function using X or B T X. One can easily notice that when d, the number of columns in B, is less than p, a dimension reduction is achieved, in the sense that only a d dimensional covariate information is necessary for fully describing the relationship (Cook, 2009). Alternatively, this relationship can be represented as (Zeng and Zhu, 2010) which again describes the sufficiency of B T X. Following the work of Li (1991), a variety of methods have been proposed. An incomplete list of literature includes Cook and Weisberg (1991); Cook and Lee (1999); Yin and Cook (2002); Chiaromonte et al. (2002); Zhu et al. (2006); Li and Wang (2007); Zhu et al. (2010b,a); Cook et al. (2010); Lee et al. (2013); Cook and Zhang (2014); Li and Zhang (2017). For a more comprehensive review of the literature, we refer the readers to Ma and Zhu (2013b). One advantage of many early developments in dimension reduction models is that only a singular value decomposition is required to obtain the reduced space parameters B through inverse sliced averaging. However, this comes at a price of assuming the linearity assumption (Li, 1991), which is almost the same as assuming that the covariates follow an elliptical distribution (Li and Dong, 2009;Dong and Li, 2010). Moreover, some methods require more restrictive assumptions on the covariance structure (Cook and Weisberg, 1991). Many methods attempt to avoid these assumptions by resorting to nonparametric estimations. The most successful ones include Xia et al. (2002) and Xia (2007). However, recently a new line of work started by Zhu (2012b,a, 2013a) shows that by formulating the problem into semiparametric estimating equations, not only we can avoid many distributional assumptions on the covariates, the obtained estimator of B also enjoys efficiency. Extending this idea, Sun et al. (2017) developed a framework for dimension reduction in survival analysis using a counting process based estimating equations. The method performs significantly better than existing dimension reduction methods for censored data such as Li et al. (1999); Xia et al. (2010) and Lu and Li (2011). Another recent development that also utilizes this semiparametric formulation is Zhao et al. (2017), in which an efficient estimator is derived.
Although there are celebrated theoretical and methodological advances, estimating B through the semiparametric estimating equations is still not a trivial task. Two challenges remain: first, by a careful look at the model definition 1, we quickly noticed that the parameters are not identifiable unless certain constraints are placed. In fact, if we let A be any d × d full rank matrix, then (BA) T X preserves the same column space information of B T X, hence, we can define h * ((BA) T X, ) accordingly to retain exactly the same model as (1). While traditional methods can utilize singular value decompositions (SVD) of the estimation matrix to identify the column space of B instead of recovering each parameter (Cook and Lee, 1999), it appears to be a difficult task in the semiparametric estimating equation framework. One challenge is that if we let B change freely, the rank of the B matrix cannot be guaranteed, which makes the formulation meaningless. Hence, for both computational and theoretical concerns, Ma and Zhu (2012b) resorts to an approach that fixes the upper d × d block of B as an identity matrix, i.e., B = (I d×d , B * T ) T , where B * is a (p − d) × d matrix that sits in the lower block of B. Hence, in this formulation, only B * needs to be solved. While the solution is guaranteed to be rank d in this formulation, as pointed out by Sun et al. (2017), this approach still requires correctly identifying and reordering of the covariate vector x such that the first d entries are indeed important, which creates another daunting task. Another challenge is that solving semiparametric estimating equations requires the estimation of nonparametric components. These components need to be computed through kernel estimations, usually the Nadaraya-Watson type, which significantly increases the computational intensity of the method considering that these components need to be recalculated at each iteration of the optimization. Up to date, these drawbacks remain as the strongest criticism of the semiparametric approaches. Hence, although enjoying superior statistical asymptotic properties, are not as attractive as a traditional sliced inverse type of approaches such as Li (1991) and Cook and Weisberg (1991).
The goal of our orthoDr package is to develop a computationally efficient optimization platform for solving the semiparametric estimating equation approaches proposed in Ma and Zhu (2013a), Sun et al. (2017) and possibly any future work along this line. Revisiting the rank preserving problem of B mentioned above, we can essentially set a constraint that where I is a d × d identity matrix. A solution of the estimating equations that satisfies the constraint will correctly identify the dimensionality-reduced subspace. This is known as optimizing on the Stiefel manifold, which is a class of well-studied problems (Edelman et al., 1998). A recent R development (Martin et al., 2016) utilizes quasi-Newton methods such as the well known BFGS method on the Riemannian manifold (Huang et al., 2018). However, second order optimization methods always require forming and storing large hessian matrices. In addition, they may not be easily adapted to penalized optimization problems, which often appear in high dimensional statistical problems Zhu et al. (2006); Li and Yin (2008). On the other hand, first-order optimization methods are faster in each iteration, and may also incorporate penalization in a more convenient way Wen et al. (2010). By utilizing the techniques developed by Wen and Yin (2012), we can effectively search for the solution in the Stiefel manifold, and this becomes the main machinery of our package. Further incorporating the popular Rcpp (Eddelbuettel and François, 2011) and RcppArmadillo (Eddelbuettel and Sanderson, 2014) toolboxes and the OpenMP parallel commuting, the computational time for our package is comparable to state-of-the-art existing implementations (such as ManifoldOpthm), making the semiparametric dimension reduction models more accessible in practice.
The purpose of this article is to provide a general overview of the orthoDr package (version 0.6.2) and provide some concrete examples to demonstrate its advantages. orthoDr is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=orthoDr and GitHub at https://github.com/teazrq/orthoDr. We begin by explaining the underlying formulation of the estimating equation problem and the parameter updating scheme that preserves orthogonality. Next, the software is introduced in detail using simulated data and real data as examples. We further demonstrate an example that utilizes the package as a general purpose solver. We also investigate the computational time of the package compared with existing solvers. Future plans for extending the package to other dimension reduction problems are also discussed.
Model description Counting process based dimension reduction
To give a concrete example of the estimating equations, we use the semiparametric inverse regression approach defined in Sun et al. (2017) to demonstrate the calculation. Following the common notations in the survival analysis literature, let X i be the observed p dimensional covariate values of subject i, Y i = min(T i , C i ) is the observed survival time, with failure time T i and censoring time C i , and is observed. We are interested in a situation that the conditional distribution of failure time T i |X i depends only on the reduced space B T X i . Hence, to estimate B, the estimating equation is given by where the operator vec(·) is the vectorization of matrix. Several components are estimated nonparasitically: the function ϕ(u) is estimated by sliced averaging, where u is chosen such that there are hn number of observations lie between u and u + u. The conditional mean function E X|Y ≥ u, B T X = z is estimated through the Nadaraya-Watson kernel estimator In addition, the the conditional hazard function at any time point u can be estimated by However, this substantially increase the computational burden since the double kernel estimator requires O(n 2 ) flops to calculate the hazard at any given u and z. Instead, an alternative version using Dabrowska (1989) can greatly reduce the computational cost without compromising the performance. Hence, we estimate the conditional hazard function by which requires only O(n) flops. In the above equations (5), (6) and (8), h is a pre-specified kernel bandwidth and K h (·) = K(·/h)/h, where K(·) is the Gaussian kernel function. By utilizing the method of moments estimators (Hansen, 1982) and noticing our constraint for identifying the column space of B, solving for the solution of the estimating equations (4) is equivalent to Essentially all other semiparametric dimension reduction models described in Ma and Zhu (2013a), and more recently Ma and Zhang (2015) (2017) and many others can be estimated in the samimilar fashion as the above optimization problem. However, due to the difficult in the constrains and the purpose of identifiability, all of these methods resort to either fixing the upper block of the B matrix as an identity matrix or adding a penalty of B T B − I F to preserve the orthogonality constraint. There appears to be no existing method that solves (9) directly. Here, we utilize Wen and Yin (2012)'s approach which can effectively tackle this problem.
Orthogonality preserving updating scheme
The algorithm works in the same fashion as a regular gradient decent, except that we need to preserve the orthogonality at each iteration of the update. As described in Wen and Yin (2012), given any feasible point B 0 , i.e., B 0 T B 0 = I, which can always be generated randomly, we update B 0 as follows. Let the p × d gradient matrix be Then, utilizing the Cayley transformation, we have with the orthogonality preserving property B T new B new = I. Here, A = GB 0 T − B 0 G T is a skewsymmetric matrix. It can be shown that {B new (τ)} τ≥0 is a descent path. Similar to line search algorithms, we can then find a proper step size τ through a curvilinear search. Recursively updating the current value of B, the algorithm stops when the tolerance level is reached. An initial value is also important for the performance of nonconvex optimization problems. A convenient initial value for our framework is the computational efficient approach developed in Sun et al. (2017), which only requires a SVD of the estimation matrix.
The R package orthoDr
There are several main functions in the orthoDr package: orthoDr_surv, ortho_reg and ortho_optim. They are corresponding to the survival model described perviously (Sun et al., 2017), the regression model in Ma and Zhu (2012b), and a general constrained optimization function, respectively. In this section, we demonstrate the details of using these main functions, illustrate them with examples.
Semiparametric dimension reduction models for survival data
The orthoDr_surv function implements the optimization problem defined in Equation (9), where the kernel estimations and various quantities are implemented and calculated within C++. Note that in addition, the method defined previously, some simplified versions are also implemented such as the counting process inverse regression models and the forward regression models, which are all described in Sun et al. (2017). These specifications can be made using the method parameter. A routine call of the function orthoDr_surv proceed as orthoDr_surv(x, y, censor, method, ndr, B.initial, bw, keep.data, control, maxitr, verbose, ncore) • x: A matrix or data.frame for features (numerical only).
• y: A vector of observed survival times.
• censor: A vector of censoring indicators.
• method: The estimating equation method used.
-"forward": forward regression model with one structural dimensional.
• ndr: The number of structural dimensional. For method = "dn" or "dm", the default is 2. For method = "forward" only one structural dimension is allowed, hence the parameter is suppressed.
• B.initial: Initial B values. Unless specifically interested, this should be left as default, which uses the computational efficient approach (with the CPSIR() function) in Sun et al. (2017) as the initial. If specified, must be a matrix with ncol(x) rows and ndr columns. The matrix will be processed by Gram-Schmidt if it does not satisfy the orthogonality constrain.
• bw: A kernel bandwidth, assuming each variables have unit variance. By default we use the Silverman rule-of-thumb formula Silverman (1986) to determine the bandwidth This bandwidth can be computed using the silverman(n,d) function in our package.
• keep.data: Should the original data be kept for prediction? Default is FALSE.
• control: A list of tuning variables for optimization, including the convergence criteria. In particular, epsilon is the size for numerically approximating the gradient, ftol, gtol, and btol are tolerance levels for the objective function, gradients, and the parameter estimations, respectively, for judging the convergence. The default values are selected based on Wen and Yin (2012) .
• verbose: Should information be displayed? Default is FALSE.
• ncore: Number of cores for parallel computing when approximating the gradients numerically. The default is the maximum number of threads.
We demonstrate the usage of orthoDr_surv function by solving a problem with generated survival data.
• s2: A matrix for the second column space (e.g., B).
• method: -"dist": the Frobenius norm distance between the projection matrices of the two given matrices, where for any given matrix B, the projection matrix P = B(B T B) −1 B T .
-"trace": the trace correlation between two projection matrices tr(P P)/d, where d is the number of columns of the given matrix.
-"canonical": the canonical correlation between B T X and B T X.
• x: The design matrix X (default = NULL), required only if method = "canonical" is used.
We compare the accuracy of the estimations obtained by the method ="dm" and "dn". Note that the "dm" method enjoys double robustness property of the estimating equations, hence the result is usually better.
Semiparametric dimension reduction models for regression
The orthoDr_reg function implements the semiparametric dimension reduction methods proposed in Ma and Zhu (2012b). A routine call of the function orthoDr_reg proceed as orthoDr_reg (x, y, method, ndr, B.initial, bw, keep.data, control, maxitr, verbose, ncore) • x: A matrix or data.frame for features (numerical only).
• y: A vector of observed continuous outcome.
• method: We currently implemented two methods: the semiparametric sliced inverse regression method ("sir"), and the semiparametric principal Hessian directions method ("phd").
-"sir": semiparametric sliced inverse regression method solves the sample version of the estimating equation -"phd": semiparametric principal Hessian directions method that estimates B by solving the sample version of • ndr: The number of structural dimensional (default is 2).
• B.initial: Initial B values. For each method, the initial values are taken from the corresponding traditional inverse regression approach using the dr package. The obtained matrix will be processed by Gram-Schmidt for orthogonality.
• bw, keep.data, control, maxitr, verbose and ncore are exactly the same as those in the orthoDr_surv function.
To demonstrate the usage of orthoDr_reg, we consider the problem of dimension reduction by fitting a semi-PHD model proposed by Ma and Zhu (2012b).
Parallelled gradient approximation through OpenMP
The estimation equations of the dimension reduction problem in the survival and regression settings usually have a complicated form. Especially, multiple kernel estimations are involved, which results in difficulties in taking derivatives analytically. As an alternative, numerically approximated gradients are implemented using OpenMP. A comparison between a single core and multiple cores (4 cores) is given in the following example. Results from 20 independent simulation runes are summarized in Table 1. The data generating procedure used in this example is the same as the survival data used in Section 2.3.1. All simulations are performed on an i7-4770K CPU. R> t0 = Sys.time() R> dn.fit = orthoDr_surv(dataX, Y, Censor, method = "dn", ndr = ndr, + ncore = 4, control = list(ftol = 1e-6)) R> Sys.time() -t0 General solver for orthogonality constrained optimization ortho_optim is a general purpose optimization function that can incorporate any user defined objective function f (and gradient function if supplied). The usage of ortho_optim is similar to the widely used optim() function. A routine call of the function proceed as ortho_optim(B, fn, grad, ..., maximize, control, maxitr, verbose) The R Journal Vol. 11/2, December 2019 ISSN 2073-4859 • B: Initial B values. Must be a matrix, and the columns are subject to the orthogonality constrains. It will be processed by Gram-Schmidt if not orthogonal.
• fn: A function that calculates the objective function value. The first argument should be B. Returns a single value.
• grad: A function that calculate the gradient. The first argument should be B. Returns a matrix with the same dimension as B. If not specified, a numerical approximation is used.
• maximize: By default, the solver will try to minimize the objective function unless maximize = TRUE.
• The parameters maxitr, verbose and ncore works in the same way as introduced in the previous sections.
To demonstrate the simple usage of ortho_optim as a drop-in function of optim(), we consider the problem of searching for the first principle component for a data matrix.
We found that "LRBFGS" and our orthoDr package usually achieve the best performance, with functional value decreases the steepest in the log scale. In terms of computing time, "LRBFGS" and orthoDr performers similarly. Although "LRTRSR1" has similar computational time, its functional value falls behind. This is mainly because the theoretical complexity of second-order algorithms is similar to first order algorithms, both are of order O(p 3 ). However, it should be noted that for a semiparametric dimension reduction method, the major computational cost is not due to the parameter updates, rather, it is calculating the gradient since complicated kernel estimations are involved. Hence, we believe there is no significant advantage using either "LRBFGS" or our orthoDr package regarding the efficiency of the algorithm. However, first order algorithms may have an advantage when developing methods for penalized high-dimensional models.
Examples
We use the Concrete Compressive Strength (Yeh, 1998) dataset as an example to further demonstrate the orthoDr_reg function and to visualize the results. The dataset is obtained from the UCI Machine Learning Repository.
Concrete is the most important material in civil engineering. The concrete compressive strength is a highly nonlinear function of age and ingredients. These ingredients include cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, and fine aggregate. In this dataset, we have n = 1030 observation, 8 quantitative input variables, and 1 quantitative output variable. We present the estimated two directions for structural dimension and further plot the observed data in these two directions. A non-parametric kernel estimation surface is further included to approximate the mean concrete strength.
Discussion
Using the algorithm proposed by Wen and Yin (2012) for optimization on the Stiefel manifold, we developed the orthoDr package that serves specifically for semi-parametric dimension reductions problems. A variety of dimension reduction models are implemented for censored survival outcome and regression problems. In addition, we implemented parallel computing for numerically appropriate the gradient function. This is particularly useful for semi-parametric estimating equation methods because the objective function usually involves kernel estimations and the gradients are difficult to calculate. Our package can also be used as a general purpose solver and is comparable with existing manifold optimization approaches. However, since the performances of different optimization approaches could be problem dependent, hence, it could be interesting to investigate other choices such as the "LRBFGS" approach in the ManifoldOptim package.
Our package also serves as a platform for future methodology developments along this line of work. For example, we are currently developing a personalized dose-finding model with dimension reduction structure (Zhou and Zhu, 2018). Also, when the number of covariates p is large, the model can be over-parameterized. Hence, applying a L 1 penalty can force sparsity and allow the model to handle high-dimensional data. To this end, first-order optimization approaches can have advantages over second-order approaches. However, persevering the orthogonality during the Cayley transformation while also preserve the sparsity can be a challenging task and requires new methodologies. Furthermore, tuning parameters can be selected through a cross-validation approach, which can be implemented in the future. | 5,048.4 | 2018-11-28T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Visual Evoked Potential as a Clinical Tool to Evaluate Changes in Brain Function Associated with Concussion
Significance: Concussions or mild traumatic brain injuries (mTBIs) are the most common form of traumatic brain injury. There is lack of reliable, diagnostic tools in healthcare for the treatment of mTBI. A Visual Evoked Potential (VEP) approach will enhance practitioners in diagnosing and treating mTBI. Purpose: The aim of this study was to evaluate changes in brain function associated mTBI/post-concussion syndrome (PCS) using Visual Evoked Potential (VEP). We examined the effects of mTBI on early visual processing to determine if the signs and symptoms of mTBI reflected the failure of retinal signal latency based on the paradigm that low spatial frequency (LSF) are processed faster than high spatial frequency (HSF) when measured at the primary visual cortex. Methods: The VEP of participants were measured by biasing the magnocellular pathways using a low Michelson contrast, temporally modulated phase reversing checkerboard stimulus pattern, and successive spatial frequency (SF) from LSF to increasing HSF. The VEP values of participants from the control group where compared with VEP values obtained from the mTBI/ PCS group. Participants from the mTBI/PCS group underwent a 6 week-7 month treatment. Results: The mTBI/PCS group failed to process the LSF faster than the increasing HSF, which resulted in loss of top-down information that leads to meaningful perception. Conclusions: The mTBI/PCS subjects resolved their VEP deficits and returned to the proper temporal organization of latency, compared with the results from the control group.
Introduction
Concussions remain the most common form of Traumatic Brain Injury (TBI) in the world today. This traumatic event occurs when two shock waves pass through the brain resulting in bruising of the underlying cortex. Subsequently, additional acceleration/deceleration forces result in stretching or twisting, and rotational acceleration forces lead to the stretching and shearing of neural and glia components resulting in a diffuse axonal injury (DAI) [1]. Synaptic disruption, resulting from the onset of a neuro-chemical cascade, then initiates metabolic and pathophysiological disruptions of brain function [2,3].
The American Congress of Rehabilitation Medicine has stated that concussions occur when any one of the following the functional manifestations occur: "any period of loss of consciousness, loss of memory for events immediately before or after an accident, any alteration of mental state (feeling dazed, disoriented, or confused), and focal neurological deficits that may or may not be transient". Other indications of a concussion include "loss of consciousness for 30 minutes or less, an initial Glascow Coma Scale (GCS) of 13-15, and/ or posttraumatic amnesia not greater than 24 hours" (American Congress of Rehabilitation Medicine, 1993). If one of these manifestations occurred, then the condition is classified as Concussion complications are manifested by a combination of specific and non-specific symptoms that may include, headache, dizziness, light sensitivity, fatigue, mental fog, orientation problems, sleep disturbance, dizziness, and loss of balance [5,6]. The symptoms of a mTBI have been associated with the Magnocellular (M), and dorsal stream pathway processing. In addition to the symptoms, other signs include oculomotor dysfunction, such as convergence insufficiency, poor fusional reserves, accommodative disturbances, saccadic and pursuit movement disorders [7]. If these signs and symptoms continue long after the expected recovery period (estimated as anywhere from 10 days to 3 months) this is classified as Post-Concussion Syndrome (PCS). Unlike mTBIs, there seems to be a lack of consensus regarding the definition of PCS [5,8].
Increasing evidence supports a thalamic hypothesis as the central mechanism for global cognitive impairment from mTBI/ PCS [9]. King et al. [10] examined brain displacement and deformation using various concussive forces on cadavers. Their findings indicated a primary injury to the thalamus. In addition, Little et al. [11] discovered damages to cortical-subcortical fibers projecting to and from the thalamus, which contributed to chronic impairment in cognition and behavior, otherwise known as a secondary injury.
Except for the most severe brain injury cases, function resumed because neural connections create new networks, which can by-pass the damaged connections. Utilizing a top -down visual therapy program with recruitment from cortical areas of the brain will assist in the recovery of function and elimination of mTBI/PCS symptoms among subjects. Chang et al. described that for vision therapy to be effective, motivation, feedback, repetition, sensorymotor mismatch and multi-sensory integration are necessary components to enhance neuroplasticity changes [6].
Neuropsychological testing is the commonly used approach for evaluating the signs and symptoms of mTBI/PCS. A very widely used neuropsychological test tool is the ImPact test. Resch et al. [12] reported variation in test-retest reliability for ImPact metrics. Their data suggested that a multi-faceted approach is better for concussion assessment. Other evaluation tools for mTBI/PCS include neurological testing, structural imaging, and blood tests looking for cellular factors, with diminutive emphasis on functional electrodiagnostic testing.
Numerous investigations have recognized specific cellular factors in the blood as potential biomarkers for the detection of mTBI [13]. In February 2018 The US Food and Drug Administration (FDA) approved a blood test to aid with concussion evaluation in adults [14]. Known as Banyan BTI (Brain Trauma Indicator), the test measures levels of two protein biomarkers --ubiquitin carboxy-terminal hydrolase-L1 and glial fibrillary acidic protein.
This testing aims to reduce unnecessary radiation exposure from imaging to 'ensure that each patient is receiving the right imaging exam, at the right time, with the right radiation dose [15]. Diffusion Tensor Imaging (DTI), the preferred imaging technique, is used to evaluate the microscopic changes associated with DAI and glia disruption [16][17][18]. However, neuro-imaging does not show the dysfunctional effects or after effects of the neurochemical cascade; this test also does not reveal the extent of mitochondrial damage because of the injury [3].
The Visual Evoked Potential (VEP) test represents the response of the visual cortex to stimuli presented in the visual field. It is a very commonly used clinical test for ruling out disorders associated with the visual pathways. The VEP uses spatial frequency analysis to determine the stability of the afferent visual pathway. The aim of this study was to present VEP testing as an added clinical tool aimed to help fill the gaps in mTBI/PCS testing.
Material and Methods
The subject's M pathways were biased with temporally modulated phase reversing checkerboard stimulus pattern using successive spatial frequencies (SF) while simultaneously measuring the VEPs waveform parameters as an indicator of the bioelectric signal temporal order. Using the VEP to evaluate the retinal signal of the M pathway through the thalamus lateral geniculate nucleus (LGN) to the visual cortex, the researchers used a low contrast checkerboard pattern reversal stimuli, which created the subjective impression that the squares of the stimulus 'stream' across the screen as though they were moving [19].
Participants
The control group consisted of fifty-four participants ranging in age from 13-66 years old ( Figure 1A: 33 males and 21 females). The control group participants were required to have no history of brain injury, neurological disease or any history of medications or substances that can affect the VEP. Each underwent a comprehensive visual examination including oculomotor binocular vision assessment and retinal evaluation with visual field. The participants were also tested with the Diopsys NOVA Vision Testing Systems VEP ad hoc module to measure their electrophysiological visual function response to successively presented SF of 16x16, 32x32, 64x64 checkerboard stimuli at a 15% Michelson contrast level, with pattern reversals. Fifty-two mTBI/PCS patients ranging in age from 13-74 years old ( Figure 1B: 32 males and 20 females) comprised the experimental group. All subjects underwent comprehensive visual examination identical to the control group and were also tested with Diopsys NOVA Vision Testing Systems VEP ad hoc module to measure their electrophysiological visual function response identical to the normal group. The treatment group consisted of 27 patients ( Figure 1C:13 males and 14 females). All subjects in the treatment group received vision therapy, as described below in section.
Vision therapy
The vision therapy treatment protocol administered to the experimental group included multi-sensory training components. The vision therapy program lasted from 6 weeks to 7 months depending on the onset and severity of signs and symptoms among the experimental participants. The components of the program exercises included "Top Down Processing" programming as described by Cohen and Chang [6], random stimulus movement from saccadic to smooth ocular motion, ocular-vestibular integration with a head sensor feedback, proprioceptive balance board movements with vibration cues, eye fixation biofeedback monitoring, and vestibular sway detection during all activity.
Statistical analysis
Microsoft Excel® spreadsheet software was used to plot participant distribution and to compute confidence intervals of mean latencies.
Results
Results obtained in this study showed reproducible VEP indications of dysfunction associated with SF processing and its correlation with the signs and symptoms associated with mTBI.
The control group VEP values exhibited a temporal order of organization of subsequent neural impulses to visual cortex indicated by an upward slope (Figure 2A, 2B). The VEP results for the mTBI/PCS group showed a disorganization of subsequent neural impulses resulting in a skewed, non-upward slope. This skewed, non-upward slope observed in the mTBI/PCS can be interpreted as temporal disorder of the subsequent retinal impulse to the visual cortex ( Figure 3A, 3B). Upon completion of their vision therapy, the treatment group was re-tested, the data was compared to their original baseline. The VEP results after vision therapy showed a return to normal temporal order of the subsequent increasing SF (Figure 4) among mTBI/PCS subjects.
Discussion
Experimental studies have shown that natural scenes, although complex, can be processed by the visual cortex quickly, which indicated that simple and efficient coding processes are involved. Current models of visual perception suggest that the first step of visual perception exists by the extraction of simple features at different SF. The LSF is conveyed by the fast-magnocellular pathways that provides coarse information about a visual stimulus involved in the attentional capture and processing of overall stimulus organization, shape and structure. The HSF conveys finer information about the visual stimulus using the slower parvocellular pathways, that conducts high resolution visual information of the object (e.g. information required for the identification of finegrained edges, borders and color [19]. Kauffmann et al. [20] & Butler et al. [21] reported that early stage processing is initiated by the rapid LSF and magnocellular pathways (M), which are two important factors effecting processing at higher stages. Deficits in early-stage visual processing significantly predicted higher cognitive and behavioral discrepancies. These findings were confirmed by Butler et al. [21] who reported the existence of early stage visual processing dysfunction in schizophrenia patients by neuroimaging studies, further support our hypothesis that dysfunction within low-level visual pathways involved the thalamocortical radiations [21].
Grossman et al. [9] advocated thalamic injury as a major cause for mTBI symptoms. Sherman [22] proposed the LGN as a useful model for understanding the circuit features found throughout the thalamus. The VEP is not a direct measure of the thalamus, however it plays an important role in the processing of the retinal signal to the primary visual cortex and may indeed play a role in the higher processing stage as proposed by Butler et al. [21] Considerable evidence exists that SF processing takes place in a default coarseto-fine order. Studies using non-human primates suggested that the fast M-pathway LSF access the primary visual cortex and dorsal cortical stream analyzing visual input alerted the parvocellular pathway to primary visual cortex facilitated the slower HSF contribution for recognition of different categories of visual stimuli. Results obtained in this study showed reproducible VEP indications of dysfunction associated with SF processing and its correlation the signs and symptoms associated with mTBI. This is in contrast to the normal group, which showed no signs or symptoms of mTBI and exhibited the expected paradigm of successive SF.
As described by Chang et al. [6] for vision therapy to be effective, motivation, feedback, repetition, sensory-motor mismatch and intermodal integration are necessary components to enhance neuroplasticity changes and following mTBI, function resumes as many neural connections reroutes by creating new neural connections, and these new networks can by-pass the loss of connections. By utilizing top -down visual therapy with recruitment from those cortical areas, VEP assisted in the recovery of function and elimination of symptoms as well as the return of the proper order of LSF latency and HSF latency.
Conclusion
The VEP is a practical clinical tool to assess the functional integrity of the retinal signal following a mTBI. The neural foundation of SF processing involved in early stage visual processing is critical for entrance to higher cortical processing. Deficits in early-stage visual processing significantly predict higher cognitive deficits. The VEP testing provided a subcortical level of evaluation when administered properly cannot be faked or misrepresented. The mTBI/PCS individuals showed slower latency for LSF eliciting the signs and symptoms involving scene perception with motion in the retinal image. When the VEP indicated the proper order of latency from coarse to fine then the individuals' symptoms and signs resolved. The VEP test and vision therapy helped close the gap in the evaluation and resolution of mTBI/PCS. | 3,021.2 | 2019-06-21T00:00:00.000 | [
"Psychology",
"Medicine",
"Biology"
] |
ENERGY EFFICIENT DISTRIBUTED CLUSTERING AND SCHEDULING ALGORITHM FOR WIRELESS SENSOR NETWORKS WITH NON-UNIFORM NODE DISTRIBUTION
- The lifetime of Randomly Distributed wireless sensor networks gets effected do to the imbalanced energy consumption in sensor nodes. The energy consumption is balanced among sensor nodes by using the efficient clustering algorithm that is proposed in this research paper. there are two cluster phases of EECS one is setup phase and another is steady state phase. The cluster election algorithm selects cluster head which uses sensor nodes local information. In the steady state phase, the time slots are allotted for member nodes as per the data which is available in sensor nodes. As compared to SA-EADC and EADC, the simulation results of EECS is better when viewed in consumption of energy and lifetime of a network
INTRODUCTION
Wireless sensor networks [1] (WSN) are the combination of small sensing self-powered sensors that are capable of operating in harsh conditions. Sensors are used to monitor the activities of interest in particular sensing field and collected data is communicated to the base station of the wireless medium. The data is then computed and clubbed together to get the desired result. Sensors can be deployed depending on the type of area if the area is known or sensing field knowledge is known in advance, then we can deploy sensors in a planned manner otherwise, if there is no prior knowledge of sensing field then we need to deploy sensors randomly. WSN finds a large number of important applications like in environment monitoring, battle filed control, healthcare,and weather forecasting. The complex arbitrary nature of clustering protocol[5] makes its design difficult for wireless sensor network technology. The clustering protocol should be designed in such a way that the consumption of energy should be reduced or minimized and also the data communication reliability should be maintained. The cluster nodes transmit the data as packets to the cluster head. The cluster head then fuses the data and forwards it to the sink. The decrease in a number of transmissions reduces the energy consumption and minimizes the data packet redundancy, and thus the bandwidth resources [2] [3] are saved. In cluster sensor network, the energy consumption in sensors defines the lifetime of the network. So if the energy consumption is not balanced, the network lifetime is decreased. Similarly, the network lifetime is increased by distributing uniformly the cluster heads, which in turn balances the energy consumption among the nodes.
The time slots are allotted to membership nodes usingTDMA[6] scheduling, and the cluster heads broadcast the TDMA schedule packets which contain node slot number and membership. In each defined time slot, one sensor transmits data. The nodes communicate directly through a cluster head. The best way to allocate slots to member nodes is that cluster head allocates dynamically on demand of member nodes rather than using uniform slot scheduling. If a node is not allotted a time slot, then the node goes into sleep mode in current session and wats for next session to request cluster head for allocating a time slot to transmit data. Prediction-based energy Efficient Distributed Clustering for WSNthat supports non-uniform node distribution scheduling algorithm is proposed. There are 2 phases of EECS, the cluster setup phase,andsteady-statephase. Local information of nodes is used to decide cluster head by using a newly designed algorithm. For scheduling of time slots, we use Round Robbin to allocate the time slots in steady-state phase.
In the end, we analyze simulation results generated in NS2 and compare those results with existing clustering algorithms.
II. RELATED WORK
In designing a wireless sensor network, the energy consumption is the key issue. For WSN, there are various energy efficient clustering protocols. One clustering based routing protocol for WSN suggested by Yu.et.al [8] which is based on energy-aware clustering algorithm which supports non-uniform node distribution. There is also another algorithm called scheduled activity energy aware distributed clustering [9] (SA-EADC) based on EADC that exploits the redundant sensor and turn them off for the current round. The redundant nodes are scheduled to work on residual energy. To minimize energy consumption "A Local Energy Consumption Prediction Based Clustering Protocol for Wireless Sensor Networks" [10] (LECP-CP) is proposed by taking more accurate and realistic cluster radius into consideration and a new cluster head election mechanism is defined on the basis of local energy consumption of node. T. H. Hsu.et.al [6] proposed "Adaptive Time Division Multiple Access Based Medium Access Control Protocol for Energy Conserving and Data Transmission in Wireless Sensor Networks" based on TDMA-based MAC protocol to save energy and increases data transmission efficiency of nodes in the cluster-based WSN networks. "Unequal cluster-based routing protocol" is another protocol in WSN that was proposed by G. Chen. Et.al [11]. J. Ma.et.al [12] proposed "Energy Efficient TDMA Sleep Scheduling in WSN". "Energy-Efficient Prediction Clustering Algorithm Multilevel Heterogeneous Wireless Sensor Network" given by Jian Peng.et.al [13] and various other protocols.
III. NETWORK MODEL
In the × squaring field we need to deploy n sensors. The following assumptions are taken into consideration while describing the network model.
1. The sensor nodes and base station nodes should be stationary. 2. In a sensing field, there is n number of sensors to beplaced uniformly. 3. In terms of energy and location sensors must be heterogeneous. 4. The data is forwarded to the base station via cluster head in continuous form. 5. Each sensor uses power control process to vary node transmission power.
Radio dissipation model [7] is used for the analysis of energy consumption in all sensor nodes. This model states that in the transmission of the node, the energy consumption depends upon the summation of constant electronic components energy and amplifier energy which is proportional to receiver distance. .In order to transmit a messageto bitsthe consumed energy ( , ) by radio is given by where is energy by electronic components is the onstant, and amount of energy dissipated in transmitter amplifier per bit. ( , ) is the distance between nodes i.e. node i and j. The energy consumed in bit message is given by
IV.
EECS ALGORITHM In this section, we explain the EECS in detail. The algorithm is divided into two phases, the first phase is called as cluster setup phase and the second phase is known as steady set up phase. Cluster setup phase is further divided into four phases: local observation and data collecting phase, cluster head phase, activation phase and cluster formation phase. The steady phase has a sub phase called as transmission phase that consist k sessions with each session having a fixed duration. In every session, we have contention, data transmission and idle periods. We assume that for non-cluster head node communicationwe requiremslots.The TDMA slot is not allotted in current round for cluster head nodes that doesn't have any data to transmit, but nodes having data to transmit are given more time slots. The nodes go into sleep mode when there is no data to transmit thus saving energy. On the basis of local observation function cluster nodes are elected. The local observation function is the mean of average residual energy and average energy consumption of neighboring sensor to residual energy and expected consumption of node itself.
A. Cluster Setup Phase
This phase is further subdivided into 5 sub-phases namely local observation, information gathering phase, cluster head phase, cluster head competition phase, sensor redundancy and cluster formation phase. The duration of each phase is T1, T2, T3, T4, and T5 respectively. The slot allotment algorithm is implemented in Steady State Phase. The table below shows various types of msgs used.
Local observation and information gathering phase.
In this phase,the Node_Msg is broadcasted by each sensor to its neighboring sensor node that lies within its transmission range r and each message contains two fields: sensor id and current energy level of the sensor. Each sensor node receives Node_Msg simultaneously from its neighbor nodes. In order to send messages to its neighboring nodes, each node calculates the athematic mean of residual energy and also the arithmetic mean of consumption of energy of its neighbor node. Each sensor node defines observation (ⅈ) from its neighbouring sensors. The observation function (ⅈ) is given by equation: Where and is residual energy and energy consumption of the node respectively. The waiting time to broadcast Head_Msg is given by: T2 is time duration, Vr is real value uniformly distributed between [0.9-1] that is used to minimize the probability of broadcasting Head_Msg by more than one node simultaneously. The cluster phase flowchart is shown below: Figure 1: Cluster setup algorithm
Competition phase in the cluster head
The phase is started when the time duration T 2 expires in EECS algorithm. During this time, the node itself broadcast the Head_msg to its neighboring nodes in range R c, In this phase, the coverage redundancy and activity of plain nodes are properly checked. Each sensor knows their status whether it isaplain node or a cluster head. The cross coverage [14] checks are executed to eliminate redundancy. In each round, the node is highly active if it has high energy.The timer of the redundant sensor is set proportional to its residual energy. If no Sleep_Msg is transmitted to the redundant node during its expiration time, then a Sleep_Msg is broadcasted by a redundant node in a range of 2Rc and status is set to inactive and node goes to sleep mode for the current round. Otherwise, the active direct neighboring list generated and redundancy check is defined again. The algorithm used in phase 1 is given below:
Algorithm 1: Cluster Setup Algorithm
if it doesn't receive any Head_msg. if the Head_msg is not sent by a node, the sensor node becomes a plain node. when the Head_Msg is broadcasted by node then node waits2 * ∆ R time so that there exists other head Head_Msg broadcasted by neighboring nodes in its range Rc, ∆ denotes the time interval that guarantee that all nodes in cluster range Rc receive Head_Msg but if has does not receive any Head_Msg in time duration ∆ Rthen it will set its state as Head.
Cluster Phase Formation
This is the last phase that starts at time duration T 4
B. Steady State Phase
. For a node,a cluster head is chosen that is closest to it which is based on strength of the signal. The plain nodes simply send a Join_Msg. The time of data transmission for the nodes is calculated by a Schedule_Msg (TDMA Schedule) to all plain nodes within range Rc. The cluster set up phase completes only once the TDMA schedule is known to plain nodes. This phase has many sessions like Contention Period, Data Transmission, Advertisement period and Idle Period in each session. The data transmission period may vary but Data Plus Idle period is fixed. Every node gets active in contention period (CP) and it follows TDMA schedule. Each node can transmit 20 Byte control messageonly in a given specific time slot if it has data otherwise slot remains empty. The Round Robin method is used to implement transmission schedule. Sensors with more observed data can request Cluster Head for more time slots for data transmission. Otherwise, if a sensor node has no data then it can go into sleep mode in order to save energy and more time slots are allocated to nodes having data. The algorithm that runs on Cluster Head for time slot allocation based on Round Robin is given below:
_________________________________________
We assume the number of slots is required equal to a number of plain nodes cluster. The slots are allocated based on RR scheduling. Suppose if a node S1, S2,and S10 need two slots, node S3, S4, S6, S7and S9 have no data to transfer, S5 needs one-time slots and S8 needs three-time slots. This scenario is shown below:
A. Network lifetime
Network lifetime is expressed when entire network is operational. To evaluate the effect of sensing range on network lifetime we set 20m and 25m sensing range for both scenarios. The resultsobtained in our protocol show network lifetime per round is increased as compared to SA-EADC and EADC. We observed that if we increase sensing range in EECS the network lifetime also gets increased. The results show improvement by 7.5% and 11% in scenario 1 when sensing range is 20m as compared to SA-EADC and EADC algorithms respectively. However, if we increase the range to 25m, the network lifetime increases 23% and 27% as compared to SA-EADC and EADC respectively. In scenario 2, when the sensing range sets equal to 20m, network lifetime improved by 7.6% and 10.3% for the sensing range 20m as compared with the SA-EADC and EADC algorithms respectively.
B. Energy Consumption
Energy consumption is defined as the average energy which is consumed by nodes consumption while topology construction, data transmission and sleep mode per rounds. The analysis shows network life is better in EECS as compared to EADC and SA-EADC respectively. In second scenario energy consumption is decreased by 15% and 24% for 20m range as compared to SA-EADC and EADC respectively.
VI. CONCLUSION
In general, from the last decade, there has been a good research in clustering of wireless sensor networks. In this proposed paper we particularly focused on main characteristics of wireless sensor networks like network lifetime and energy consumption. As it was observed that EECS paves way for clustering of WSNs. This protocol leads to energy efficient routing in WSNs. The scalability in WSNs is supported by using scheduling algorithm. Since it reduces communication overhead and energy consumption which is independent of network size, thus becomes suitable for realtimelarge-scale WSNs. In future,we will try to further maximize energy efficiency by choosing the optimal number of clusters. We will also try to develop recovery protocols needed in case the cluster head fails. VII. | 3,213 | 2018-04-20T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Comparative Assessment of Silver Nanocomposites’ Biological Effects on the Natural and Synthetic Matrix
The aim of our investigation was to make a comparative assessment of the biological effects of silver nanoparticles encapsulated in a natural and synthetic polymer matrix. We carried out a comparative assessment of the biological effect of silver nanocomposites on natural (arabinogalactan) and synthetic (poly-1-vinyl-1,2,4-triazole) matrices. We used 144 three-month-old white outbred male rats, which were divided into six groups. Substances were administered orally for 9 days at a dose 500 μg/kg. Twelve rats from each group were withdrawn from the experiment immediately after nine days of exposure (early period), and the remaining 12 rats were withdrawn from the experiment 6 months after the end of the nine-day exposure (long-term period). We investigated the parietal–temporal area of the cerebral cortex using histological (morphological assessments of nervous tissue), electron microscopic (calculation of mitochondrial areas and assessment of the quality of the cell nucleus), and immunohistochemical methods (study of the expression of proteins regulating apoptosis bcl-2 and caspase 3). We found that the effect of the nanocomposite on the arabinogalactan matrix causes a disturbance in the nervous tissue structure, an increase in the area of mitochondria, a disturbance of the structure of nerve cells, and activation of the process of apoptosis.
Introduction
Technical progress in the global nanoindustry is aimed at creating new, highly effective diagnostic and unique therapeutic nanoscale agents. This is possible due to the biospecific properties of nanoparticles "attached" to polymers designed for specific delivery and binding of nanoparticles to biological targets. Realizing their physicochemical and biological effects will raise the degree of solutions of most diagnostic and therapeutic problems to a new level [1][2][3]. Composite materials containing silver nanoparticles have unique properties and are promising for medicine. At the same time, nanosilver retains its inherent universal aseptic properties of silver macroforms. It can exert a specific effect at a minimum dose, which makes it possible to reduce the cost of silver-based drugs and make them available for the treatment of many infectious diseases [4][5][6][7]. The nano-stabilizing efficiency of the matrix has great importance in the formation of silver nanocomposites. The Irkutsk Institute of Chemistry SB RAS synthesized nanobiocomposites on natural (arabinogalactan (AG)) [8] and synthetic (poly-1-vinyl-1,2,4-triazole (PVT) matrices [9]). Nanobiocomposites are biocompatible, highly coordinating, and soluble [10][11][12]. However, the use of these nanocomposites is impossible without preliminary safety studies. First of all, research should study the patterns of their impact on health at the cellular and subcellular levels. In the scientific literature, there is an insufficient number of criteria for assessing the toxic effect of nanocomposites. Prospects for the widespread introduction of nanocomposites containing nanosilver [13] require a timely and in-depth study of their biological effects, including remote ones. A significant number of researchers have evaluated the biological effects of silver nanoparticles in preclinical studies, mainly using acute or subchronic experiments [14][15][16][17], while the possibility of long-term exposure to nanoparticles has not been studied.
The aim of our investigation was to compare the biological effects of silver nanoparticles encapsulated in a natural and synthetic polymer matrix.
Histological Results
In the nAG group, disturbances in the structure of the nervous tissue were revealed: expansion of the perivascular spaces, swelling of vascular bundles, neuronophagy, swelling of myocytes and vascular endotheliocytes, and dark neurons ( Figure 1). These disorders were also recorded in the long-term period ( Figure 2). for assessing the toxic effect of nanocomposites. Prospects for the widespread introduction of nanocomposites containing nanosilver [13] require a timely and in-depth study of their biological effects, including remote ones. A significant number of researchers have evaluated the biological effects of silver nanoparticles in preclinical studies, mainly using acute or subchronic experiments [14][15][16][17], while the possibility of long-term exposure to nanoparticles has not been studied. The aim of our investigation was to compare the biological effects of silver nanoparticles encapsulated in a natural and synthetic polymer matrix.
Histological Results
In the nAG group, disturbances in the structure of the nervous tissue were revealed: expansion of the perivascular spaces, swelling of vascular bundles, neuronophagy, swelling of myocytes and vascular endotheliocytes, and dark neurons ( Figure 1). These disorders were also recorded in the long-term period ( Figure 2).
In animals of the AG group, at all examination periods, a slight expansion of the perivascular spaces and neuronophagy was revealed, which indicates metabolic changes in the structure of cells and tissues ( Figure 3). for assessing the toxic effect of nanocomposites. Prospects for the widespread introduction of nanocomposites containing nanosilver [13] require a timely and in-depth study of their biological effects, including remote ones. A significant number of researchers have evaluated the biological effects of silver nanoparticles in preclinical studies, mainly using acute or subchronic experiments [14][15][16][17], while the possibility of long-term exposure to nanoparticles has not been studied. The aim of our investigation was to compare the biological effects of silver nanoparticles encapsulated in a natural and synthetic polymer matrix.
Histological Results
In the nAG group, disturbances in the structure of the nervous tissue were revealed: expansion of the perivascular spaces, swelling of vascular bundles, neuronophagy, swelling of myocytes and vascular endotheliocytes, and dark neurons ( Figure 1). These disorders were also recorded in the long-term period ( Figure 2).
In animals of the AG group, at all examination periods, a slight expansion of the perivascular spaces and neuronophagy was revealed, which indicates metabolic changes in the structure of cells and tissues ( Figure 3). Microphoto of a section in cerebral cortex nAG-exposed rats (long-term period). 1-Expansion of the perivascular spaces, 2-neuronophagy. Hematoxylin-eosin. Mag. × 400.
In animals of the AG group, at all examination periods, a slight expansion of the perivascular spaces and neuronophagy was revealed, which indicates metabolic changes in the structure of cells and tissues ( Figure 3). Morphological examination of brain tissue preparations from exposed nPVT rats showed only insignificant swelling of the conductive fibers of the animals in the early period of investigation ( Figure 4). In the PVT group only, single changes were observed that did not have multiple confirmations and did not differ from the control. In rats treated with CS, the results of the morphological examination of brain preparations were comparable with the results of the control group ( Figure 5).
Results of Ultrastructural Analysis
Ultrastructural examination of the mitochondrial area in the early and long-term periods revealed a significant increase in the neuronal mitochondria analysis of neurons in Morphological examination of brain tissue preparations from exposed nPVT rats showed only insignificant swelling of the conductive fibers of the animals in the early period of investigation ( Figure 4). In the PVT group only, single changes were observed that did not have multiple confirmations and did not differ from the control. In rats treated with CS, the results of the morphological examination of brain preparations were comparable with the results of the control group ( Figure 5). Morphological examination of brain tissue preparations from exposed nPVT rats showed only insignificant swelling of the conductive fibers of the animals in the early period of investigation ( Figure 4). In the PVT group only, single changes were observed that did not have multiple confirmations and did not differ from the control. In rats treated with CS, the results of the morphological examination of brain preparations were comparable with the results of the control group ( Figure 5).
Results of Ultrastructural Analysis
Ultrastructural examination of the mitochondrial area in the early and long-term periods revealed a significant increase in the neuronal mitochondria analysis of neurons in Morphological examination of brain tissue preparations from exposed nPVT rats showed only insignificant swelling of the conductive fibers of the animals in the early period of investigation ( Figure 4). In the PVT group only, single changes were observed that did not have multiple confirmations and did not differ from the control. In rats treated with CS, the results of the morphological examination of brain preparations were comparable with the results of the control group ( Figure 5).
Results of Ultrastructural Analysis
Ultrastructural examination of the mitochondrial area in the early and long-term periods revealed a significant increase in the neuronal mitochondria analysis of neurons in
Results of Ultrastructural Analysis
Ultrastructural examination of the mitochondrial area in the early and long-term periods revealed a significant increase in the neuronal mitochondria analysis of neurons in the nAG group in all periods. CS, AG, PVT, and nPVT groups did not reveal an increase in the area of mitochondria, which indicates the preservation of the metabolic processes in neurons ( Figure 6). the nAG group in all periods. CS, AG, PVT, and nPVT groups did not reveal an increase in the area of mitochondria, which indicates the preservation of the metabolic processes in neurons ( Figure 6). Ultrastructural analysis of the brain tissue of rats exposed to nAG revealed an increasing dynamic deformation of the neuron nucleus ( Figure 7). An irregular, deformed shape and an increase in the area of mitochondria indicate an unfavorable effect of silver nanoparticles on intracellular structures and are indirect evidence of the ability of nanosilver to penetrate from the polymer matrix into the brain.
Results of the Immunohistochemical Investigation
Analysis of the activity of regulatory proteins in neurons indicated an increase in the expression of anti-and pro-apoptotic proteins in neurons only in the case of the injection of the nanocomposite with the AG matrix. The study of the expression of the apoptosisinhibiting protein factor bcl-2 showed that there was a statistically significant increase in Ultrastructural analysis of the brain tissue of rats exposed to nAG revealed an increasing dynamic deformation of the neuron nucleus ( Figure 7). An irregular, deformed shape and an increase in the area of mitochondria indicate an unfavorable effect of silver nanoparticles on intracellular structures and are indirect evidence of the ability of nanosilver to penetrate from the polymer matrix into the brain. the nAG group in all periods. CS, AG, PVT, and nPVT groups did not reveal an increase in the area of mitochondria, which indicates the preservation of the metabolic processes in neurons ( Figure 6). Ultrastructural analysis of the brain tissue of rats exposed to nAG revealed an increasing dynamic deformation of the neuron nucleus ( Figure 7). An irregular, deformed shape and an increase in the area of mitochondria indicate an unfavorable effect of silver nanoparticles on intracellular structures and are indirect evidence of the ability of nanosilver to penetrate from the polymer matrix into the brain.
Results of the Immunohistochemical Investigation
Analysis of the activity of regulatory proteins in neurons indicated an increase in the expression of anti-and pro-apoptotic proteins in neurons only in the case of the injection of the nanocomposite with the AG matrix. The study of the expression of the apoptosisinhibiting protein factor bcl-2 showed that there was a statistically significant increase in
Results of the Immunohistochemical Investigation
Analysis of the activity of regulatory proteins in neurons indicated an increase in the expression of anti-and pro-apoptotic proteins in neurons only in the case of the injection of the nanocomposite with the AG matrix. The study of the expression of the apoptosisinhibiting protein factor bcl-2 showed that there was a statistically significant increase in the percentage of hyperchromic neurons in the nAG group in comparison with the control, CS, and AG groups in the early period of the examination (Figure 8). Simultaneously, there was a significant increase in the number of normal immunopositive cells and, accordingly, a decrease in normal immunonegative cells. The results obtained indicate the activation of the expression of the apoptosis-inhibiting protein factor and the mobilization of defense mechanisms that prevent the development of apoptosis. In the long-term period, the revealed direction of the changes remained. At the same time, hyperchromic and normal cells immunopositive to bcl-2 with a simultaneous reduction in the number of normal cells without expression to the protein under study were detected much more often than the control and AG groups. the percentage of hyperchromic neurons in the nAG group in comparison with the control, CS, and AG groups in the early period of the examination (Figure 8). Simultaneously, there was a significant increase in the number of normal immunopositive cells and, accordingly, a decrease in normal immunonegative cells. The results obtained indicate the activation of the expression of the apoptosis-inhibiting protein factor and the mobilization of defense mechanisms that prevent the development of apoptosis. In the long-term period, the revealed direction of the changes remained. At the same time, hyperchromic and normal cells immunopositive to bcl-2 with a simultaneous reduction in the number of normal cells without expression to the protein under study were detected much more often than the control and AG groups. (B)-Long-term period. Note: *-the differences are statistically significant compared with the control group at p < 0.01; # -the differences are statistically significant compared to the AG group at p < 0.01;ˆ-the differences are statistically significant compared with the CS group at p < 0.01.
The study of caspase-3 expression revealed statistically significant differences between the nAG and AG groups in both periods (Figure 9). There was a reduction per unit area of the number of normal unchanged cells without expression of the pro-apoptotic protein caspase-3. In contrast, hyperchromic cells and normal cells expressing caspase-3 increased significantly. The revealed results indicate the activation of apoptotic processes immediately after the end of the exposure of the nanobiocomposite. This is consistent with the data on the expression of the apoptosis inhibitor bcl-2, which begins to exert a protective effect simultaneously in response to the activation of the apoptotic process under the influence of nAG. In the long-term period of examination, the number of hyperchromic and normal cells expressing the caspase-3 protein becomes even higher, which indicates an increase in the process of apoptosis. control group at p < 0.01; # -the differences are statistically significant compared to the AG group at p < 0.01; ^the differences are statistically significant compared with the CS group at p < 0.01.
The study of caspase-3 expression revealed statistically significant differences between the nAG and AG groups in both periods (Figure 9). There was a reduction per unit area of the number of normal unchanged cells without expression of the pro-apoptotic protein caspase-3. In contrast, hyperchromic cells and normal cells expressing caspase-3 increased significantly. The revealed results indicate the activation of apoptotic processes immediately after the end of the exposure of the nanobiocomposite. This is consistent with the data on the expression of the apoptosis inhibitor bcl-2, which begins to exert a protective effect simultaneously in response to the activation of the apoptotic process under the influence of nAG. In the long-term period of examination, the number of hyperchromic and normal cells expressing the caspase-3 protein becomes even higher, which indicates an increase in the process of apoptosis. Comparing the expression indices of the two investigated regulatory proteins, it was found that, in the nAG group, at an early stage of the examination, when evaluating the expression of the caspase-3 protein, the number of hyperchromic cells was 1.37 times higher than when evaluating the expression of the bcl-2 protein. The number of normal immunopositive cells expressing bcl-2 increased slightly compared to the AG group, while the number of the same cells expressing caspase-3 was 2.65 times higher. In the long-term follow-up, in the nAG group, the number of hyperchromic and normal immunopositive cells with caspase-3 expression also significantly exceeds the analogous indicators of bcl-2 protein expression (4.5 and 4.4 times, respectively), which indicates a continuous active apoptotic process that suppresses the action of an anti-apoptotic protein. An analysis of the results of the expression of apoptosis-regulating proteins in rats exposed to nPVT did not reveal significant changes in comparison with the introduction of a pure polymeric PVT matrix, indicating the activation of apoptosis in nerve cells throughout the entire observation period.
Discussion
Numerous studies have put an end to the discussion about the possibility of metal nanoparticles penetrating the blood-brain barrier, which restricts the flow of many substrates into the brain. Experimental studies have established alternative changes in the blood-brain barrier and brain tissue with different parenteral routes of entry of metal nanoparticles (Ag, Cu, Al, etc.) [18][19][20][21][22]. The deformation of the nuclei of neurons and an increase in the mitochondrial area revealed in this study, along with disturbances in the structure of the nervous tissue of the cerebral cortex, increased in the long term after exposure to nAG and may have a significant effect on the processes of intracellular metabolism. We recently showed that exposure to gadolinium nanoparticles encapsulated in a polymer matrix of AG leads to an increase in the number of degeneratively altered neurons and neuronophagy in the sensorimotor cortex in rats [23]. In addition, we established the selective cytotoxicity of copper oxide nanoparticles on the AG matrix, causing a decrease in astroglial cells, which can also lead to disruption of the normal homeostasis of the nervous tissue [24]. It is believed that the biological effects of metal nanoparticles can be mediated either by the direct action of nanoscale structures entering the tissues as such or by the influence of ions that can be separated from the surface of the introduced nanostructures [25]. At the same time, it is known that one of the main mechanisms of action of how nanoparticles damage the cell is the generation of radical forms of oxygen and the induction of oxidative damage to DNA in the brain tissue [26,27].
A comparison of the results of a morphological study of the nervous tissue with the data on the expression of caspase-3 and bcl-2 proteins allows us to conclude that nanosilver encapsulated in a polymer matrix arabinogalactan is capable of inducing an apoptotic cascade in neurons of the cerebral cortex, which, after nine-fold administration of the nanobiocomposite, is located on the initial stage of dysregulation of the mechanisms of programmed cell death and gradually, over time, leads to a state of the cell with characteristic signs of an active apoptotic process. Taking into account that the number of hyperchromic cells increases with the introduction of nAG in the long-term, it can be concluded that cell death occurs with the start of the apoptosis program and is caused by other mechanisms of cell damage and death. In our opinion, when programmed cell death is triggered, the mitochondrial pathway of cell entry into apoptosis is quite probable when active caspases formed from procaspases suppress the activity of the anti-apoptotic protein bcl-2. Caspase 3 is one of the end points of the cascade of activation of proteolytic enzymes leading to programmed cell death [28] and the formation of intracellular defense mechanisms.
The activity of the apoptosis process has its own characteristics depending on the type of metal nanoparticles. In recent studies, we did not reveal changes in the expression of caspase-3 and bcl-2 under the influence of iron oxide nanoparticles encapsulated in AG, while a decrease in the total number of neurons in the tissue of the sensorimotor cortex was observed, which makes it possible in this case to exclude the effect of apoptosis mechanisms on neuronal death [29]. Gadolinium nanoparticles on the AG matrix at a similar dose caused a decrease in the expression of bcl-2 in the rat brain [23], which is consistent with the results of Alarifi S. et al. (2017) on the suppression of the expression of Bcl-2 mRNA when exposed to nanoparticles of gadolinium oxide Gd 2 O 3 on the culture of human neuroblastoma cells [30]. A decrease in the activity of this protein makes the cell more susceptible to apoptosis. At the same time, when exposed to Gd 2 O 3 nanoparticles, an increase in the activity of the Bax protein, capable of activating the apoptotic process, was established [30]. The induction of apoptosis and activation of P53-dependent signaling in neurons under the influence of titanium dioxide (TiO 2 ) nanoparticles were revealed [31]. In neuronal stem cells of the CNS, apoptosis can also be caused by zinc oxide nanoparticles [32]. One of the reports showed the role of alumina nanoparticles in development-induced apoptosis against the background of deterioration in the skills of spatial orientation in animals in a maze, which confirms the key role of nanoaluminum in neurotoxic reactions [33].
In our opinion, the appearance of long-term effects of action upon administration of silver nanobiocomposite to rats is due to the long-term persistence of nanoparticles in the body with their insignificant elimination from the body, the ability to accumulate material, and the formation of conglomerates in cell structures and intercellular space. The results obtained gave grounds to conclude that there was no difference in the biological effects of the nanobiocomposite containing nanosilver in the synthetic HTP matrix and the "pure" PVT on rats. PVT and its derivatives, due to the peculiarities of their chemical structure (absence of open chemical bonds, general chemical stability), do not disintegrate into individual components, do not integrate into the chain of biological reactions in the body, and are excreted practically unchanged. We assume that, due to the closed chemical structure, nanosilver does not release from PVT-nanocomposite, does not penetrate the blood-brain barrier, and does not take part in the reactions of cellular metabolism.
Silver Nanocomposites' Preparation and Characterization
All the chemicals were from Favorsky Institute of Chemistry SB RAS (Irkutsk, Russia). Before the introduction, all substances were suspended in distilled water to prepare an initial suspension (1 mg/mL). Nanocomposite nAG contains silver nanoparticles in the zero-valent state of a spherical shape with size 4-8 nm; silver content was 3.1% [34]. Arabinogalactan is a water-soluble white or creamy powder, tasteless and odorless with a patented production technology [8]. The macromolecule of arabinogalactan is represented by the residues of galactose and arabinose.
Nanocomposite nPVT contains spherically shaped silver nanoparticles with sizes 2-6 nm; the silver content in sample was 7.03% [35]. Synthetic polymer PVT is a watersoluble biocompatible polymer with chemical resistance and thermal stability, capable of stabilizing particles of silver nanoparticles in the zero-valent state [9].
All chemicals was dissolved in distilled water, working solutions were prepared on the day of administration.
Animals and Experimental Design
One hundred forty-four three-month-old white outbred male rats (weight 180-200 g) were used for the investigation. The animals were randomly assigned to six groups (n = 24): two groups were exposed to silver nanoparticles encapsulated in natural biopolymer arabinogalactan (group nAG) and synthetic poly-1-vinyl-1,2,4-triazole ( group nPVT) at a dose 500 µg/kg. This dose was chosen based on the results of previous investigations and was 1/10 of LD 50 . Two groups received an aqueous solution of polymers without nanoparticles (groups AG and PVT) in an equivalent volume. Animals of CS group received an aqueous dispersion of colloidal silver, stabilized by casein, with a silver content of 8%. Animals of control group received distilled water. Solutions were administered orally using an atraumatic probe for 9 days.
The investigation was carried out in 2 stages: 12 rats from each group were withdrawn from the experiment immediately after exposure (early period), 12 rats-6 months after the end of exposure (long-term period). The examination included morphological studies of the nerve tissue of the temporoparietal zone of the cerebral cortex, electron microscopy of neurons in the cerebral cortex, and determination of the activity of the proteins caspase 3 and bsl-2.
All animals were kept under 12/12 h light/dark cycle, on a ventilated shelf, and under controlled temperature and humidity conditions (22-25 •
Histological Investigation
To perform morphological studies of the nervous tissue, the animals underwent euthanasia by decapitation. The brain from each animal under study was removed and fixed in neutral buffered formalin solution (10%), dehydrated with ascending concentrations of ethanol (70, 80, 90, 95, and 100%), and placed in a homogenized paraffin medium for histological studies HistoMix (BioVitrum, Russia). Then, using an HM 400 microtome (Microm, Germany), serial horizontal sections with a thickness of 4-5 µm were made at Bregma-6.10 mm level, Interaural-3.90 mm, which were stained on ordinary histological slides with hematoxylin-eosin for observation microscopy [36].
Electron Microscopy Investigation
Electron microscopy was used for ultrastructural assessment of the state of neurons in the cerebral cortex. The studies were carried out using a Leo 906E electron microscope (Zeiss, Germany). The number and cross-sectional area of mitochondria and state of neuronal nuclei were determined at different periods of the investigation.
Immunohistochemical Investigation
An immunohistochemical method was used to determine the activity of caspase 3 and bsl-2 proteins. Sections obtained on a microtome were placed on poly-L-lysine coated slides (Menzel-Gläser, Braunschweig, Germany) and stained for antibodies to the caspase-3 protein and antibodies to the bsl-2 protein (Monosan, Uden, Netherlands) in accordance with the protocol proposed by the manufacturer. Visualization of stained and fixed micropreparations was carried out using a light-optical research microscope Olympus BX 51 (Olympus, Tokio, Japan) with microimages input into a computer using an Olympus camera. The analysis of the obtained photographic materials was carried out using the Image Scope S system (SMA, Moscow, Russia). The following analysis parameters were selected: number of immunopositive and immunonegative normal neurons and hyperchromic neurons. Cells stained for antibodies to caspase-3 and bsl-2 proteins were immunopositive, and unstained cells were immunonegative. Cells without a well-defined nucleus were considered hyperchromic, which is a sign of damage. The number of cells was determined per unit area of the histological preparation (0.2 mm 2 ).
Statistical Analyses
Statistical analysis of the research results was carried out using the Statistica 6.1 software package (Statsoft, Tulsa, OK, USA). The Shapiro-Wilk W-test was used to decide the type of feature distribution. To compare groups, we used the Mann-Whitney U-test. Null hypotheses about the absence of differences between the groups were rejected at the achieved significance level of p ≤ 0.05.
Conclusions
A comparative analysis of the biological effects of silver nanoparticles encapsulated in various stabilizing matrices revealed the features of their effect. Pathological abnormalities expressed in the structure of the temporoparietal zone of the sensorimotor cortex of the rat brain, increasing over time, were found in the nanobiocomposite of silver nanoparticles and the natural polysaccharide arabinogalactan. At the same time, the introduction in a similar mode and dose of silver nanoparticles encapsulated in a synthetic matrix of poly-1vinyl-1,2,4-triazole did not lead to any noticeable changes in the studied parameters early on or in the long-run. Features of the biological effects of silver nanoparticles encapsulated on various matrices can be used in medical research to reduce the adverse effects of silver nanoparticles.
Data Availability Statement:
The data presented in this study are available from the corresponding author upon request.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,185.2 | 2021-12-01T00:00:00.000 | [
"Biology"
] |
Functional Regulation of Pre-B-cell Leukemia Homeobox Interacting Protein 1 (PBXIP1/HPIP) in Erythroid Differentiation*
Background: HPIP is a pre-B-cell leukemia homeobox 1 (PBX1) interacting protein with unknown function in hematopoiesis. Results: The HPIP gene is a target of GATA1 and CTCF and regulates erythroid differentiation involving PI3K/AKT-dependent mechanisms. Conclusion: HPIP is a novel downstream target of GATA1 and serves as an essential regulator of erythroid differentiation. Significance: A new regulator of erythroid differentiation is discovered. This finding may help in better understanding erythropoiesis. Pre-B-cell leukemia homeobox interacting protein 1 or human PBX1 interacting protein (PBXIP1/HPIP) is a co-repressor of pre-B-cell leukemia homeobox 1 (PBX1) and is also known to regulate estrogen receptor functions by associating with the microtubule network. Despite its initial discovery in the context of hematopoietic cells, little is yet known about the role of HPIP in hematopoiesis. Here, we show that lentivirus-mediated overexpression of HPIP in human CD34+ cells enhances hematopoietic colony formation in vitro, whereas HPIP knockdown leads to a reduction in the number of such colonies. Interestingly, erythroid colony number was significantly higher in HPIP-overexpressing cells. In addition, forced expression of HPIP in K562 cells, a multipotent erythro-megakaryoblastic leukemia cell line, led to an induction of erythroid differentiation. HPIP overexpression in both CD34+ and K562 cells was associated with increased activation of the PI3K/AKT pathway, and corresponding treatment with a PI3K-specific inhibitor, LY-294002, caused a reduction in clonogenic progenitor number in HPIP-expressing CD34+ cells and decreased K562 cell differentiation. Combined, these findings point to an important role of the PI3K/AKT pathway in mediating HPIP-induced effects on the growth and differentiation of hematopoietic cells. Interestingly, HPIP gene expression was found to be induced in K562 cells in response to erythroid differentiation signals such as DMSO and erythropoietin. The erythroid lineage-specific transcription factor GATA1 binds to the HPIP promoter and activates HPIP gene transcription in a CCCTC-binding factor (CTCF)-dependent manner. Co-immunoprecipitation and co-localization experiments revealed the association of CTCF with GATA1 indicating the recruitment of CTCF/GATA1 transcription factor complex onto the HPIP promoter. Together, this study provides evidence that HPIP is a target of GATA1 and CTCF in erythroid cells and plays an important role in erythroid differentiation by modulating the PI3K/AKT pathway.
Pre-B-cell leukemia homeobox interacting protein 1 or human PBX1 interacting protein (PBXIP1/HPIP) is a co-repressor of pre-B-cell leukemia homeobox 1 (PBX1) and is also known to regulate estrogen receptor functions by associating with the microtubule network. Despite its initial discovery in the context of hematopoietic cells, little is yet known about the role of HPIP in hematopoiesis.
Here, we show that lentivirus-mediated overexpression of HPIP in human CD34 ؉ cells enhances hematopoietic colony formation in vitro, whereas HPIP knockdown leads to a reduction in the number of such colonies. Interestingly, erythroid colony number was significantly higher in HPIP-overexpressing cells. In addition, forced expression of HPIP in K562 cells, a multipotent erythro-megakaryoblastic leukemia cell line, led to an induction of erythroid differentiation. HPIP overexpression in both CD34 ؉ and K562 cells was associated with increased activation of the PI3K/AKT pathway, and corresponding treatment with a PI3K-specific inhibitor, LY-294002, caused a reduction in clonogenic progenitor number in HPIP-expressing CD34 ؉ cells and decreased K562 cell differentiation. Combined, these findings point to an important role of the PI3K/AKT pathway in mediating HPIP-induced effects on the growth and differentiation of hematopoietic cells. Interestingly, HPIP gene expression was found to be induced in K562 cells in response to erythroid differentiation signals such as DMSO and erythropoietin. The erythroid lineage-specific transcription factor GATA1 binds to the HPIP promoter and activates HPIP gene transcription in a CCCTC-binding factor (CTCF)-dependent manner. Co-immunoprecipitation and colocalization experiments revealed the association of CTCF with GATA1 indicating the recruitment of CTCF/GATA1 transcription factor complex onto the HPIP promoter. Together, this study provides evidence that HPIP is a target of GATA1 and CTCF in erythroid cells and plays an important role in erythroid differentiation by modulating the PI3K/AKT pathway. The human hematopoietic system is composed of a heterogeneous population of cells that range in function from mature cells with limited proliferative potential to pluripotent stem cells known as hematopoietic stem cells (HSC) 4 with extensive proliferation, differentiation, and self-renewal capacities (1,2). This process is governed by the interplay of a number of transcription factors and various signaling pathways, which altogether facilitate proper hematopoietic development (3,4). Emerging evidence indicates that human leukemias, lymphomas, and possibly myelodysplastic syndromes are initiated at the level of HSCs and/or early multipotent progenitors that have been transformed due to genetic/chromosomal aberrations or deregulation of gene expression (5). Of several regulators of HSC, PBX transcription factors play an important role in the establishment and maintenance of definitive hematopoiesis, and PBX overexpression has been linked to leukemia development (6). PBX proteins mainly act as cofactors for HOX pro-teins (7). Particularly, PBX1 together with HOX genes are essential for normal HSC development, and its deregulation led to leukemogenesis (8,9). Ablation of the Pbx1 gene in mice causes an embryonic lethal phenotype with severe homeotic malformations, hypoplasy (or absence) of many organs, but also lymphoid, myeloid, and erythroid deficiencies (10). Therefore, understanding the protein regulatory network linked to PBX1 is important for normal hematopoiesis as well as leukemia development.
In an attempt to map the interactome of PBX1, we have previously identified human PBX interacting protein (HPIP), also known as pre-B-cell leukemia homeobox interacting protein 1 (PBXIP1), as a PBX1 interacting protein through a yeast two-hybrid approach employing a human hematopoietic cDNA-based library (11). HPIP is a nucleo-cytoplasmic shuttling protein (12). HPIP also interacts with PBX2 and PBX3. HPIP inhibits the ability of PBX-HOX heterodimers to bind to target sequences. Moreover, HPIP strongly inhibits the transcriptional activation capacity of E2A-PBX suggesting HPIP is a newly recognized regulator of PBX function (11). The same study also reported that similar to many HOX family members, HPIP is expressed in the most primitive hematopoietic stem cell-enriched CD34 ϩ population, whereas its expression is found very low in terminally differentiating CD34 Ϫ hematopoietic populations (11).
Recent studies have also revealed the role of HPIP in cell migration and proliferation in breast cancer cells (13,14). Mouse xenograft studies and anchorage-independent growth assays demonstrated the oncogenic nature of HPIP (13). HPIP regulates these functions by activation of PI3K/AKT and Src/ MAPK pathways (13). Accumulating evidence supports that the PI3K/AKT signaling pathway is also a key player in developmental hematopoiesis, hematopoietic stem cell survival, and self-renewal (15)(16)(17). For example, PI3K/AKT transduces Srcinduced erythroid cell differentiation (18). In addition, AKT, which is one of the main downstream targets of PI3K, mediates erythropoiesis in response to erythropoietin signaling by controlling GATA1 transcriptional activity on the TIMP-1 gene (19). In particular, DMSO-induced erythroid differentiation is dependent on PI3K activity (20). Altogether, these reports support the central role for the PI3K/AKT pathway in erythropoiesis.
Based on HPIP being expressed in primitive human hematopoietic stem and progenitor cell-enriched CD34 ϩ populations, a close relationship of HPIP with PBX functions, and a demonstrated role for HPIP in regulating the PI3K/AKT signaling pathway, we hypothesized that HPIP has important functional roles in hematopoiesis. To test this, we have employed overexpression and knockdown of HPIP in primary human CD34 ϩ cells and in K562 cells and assessed the impact on colony formation and differentiation. These functional and molecular studies demonstrate that HPIP expression is induced in response to erythroid differentiation inducers such as DMSO and erythropoietin (Epo) and regulates erythroid differentiation by activating the PI3K/AKT pathway.
EXPERIMENTAL PROCEDURES
Cell Culture-The human leukemic cell lines K562 and HL60 were obtained from the American Type Culture Collection (Manassas, VA). Cells were maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS), 2 mM L-glutamine, 10 units/ml penicillin, and 10 g/ml streptomycin.
Plasmids-To study the regulation of HPIP gene expression, we amplified by PCR an ϳ2.3-kb 5Ј-flanking region of the HPIP gene using BAC clone ID RP11-307C12 (gift from J. D. Shaughnessy, Jr., University of Arkansas for Medical Sciences) as template using specific primers as follows: forward primer, 5Ј-ATGCCTCGAGACTAATCTAGAAGGAATG-3Ј (XhoI site), and reverse primer, 5Ј-ATGCAAGCTTAGGAGGCCATAG TTGCTG-3Ј (HindIII site). The PCR fragment was subsequently cloned into a promoter probe vector, pGL3. The cloned promoter region was sequence-verified and found correctly located between the Ϫ154,934and Ϫ154,937-kb region on human chromosome 1q21.3 (Fig. 6A). In humans, the HPIP gene is located upstream of the PYGO2 gene and downstream of the PMVK gene on chromosome 1q21.3. Using web-based PROSCAN suite 7.1 software, we have identified the annotated (putative) transcription start site that is located at Ϫ740 bp from the start codon ATG (supplemental Fig. S5), and the TATA box is located at Ϫ769 bp. The cDNA that encodes HPIP was amplified by PCR with the following the primers using pMIG-HPIP plasmid as template: forward primer, 5Ј-TGG-CCAATTGCCACCATGGACTACAAAGAC-3Ј (underlined sequence denotes MfeI site; sequence in italics encode partial sequence of FLAG tag), and reverse primer, 5Ј-AGT-CATGCATTCAGCCCCGTGTGTGGTG 3Ј. HPIPshRNA in pGIPz vector was provided by Dr. Sam Aparicio, British Columbia Cancer Agency, University of British Columbia, Vancouver, Canada.
Induction of K562 and G1E-ER4 Differentiation-To induce K562 differentiation, cells at a density of 10 5 cells per ml were treated with DMSO (1.6%) or sodium butyrate (1.5 mM), as indicated. To examine the effects of Epo on K562 cells, cells were grown in the presence or absence of EPO (5-15 units/ml) in RPMI 1640 medium supplemented with heat-inactivated 10% FBS for various time points. For inhibitor studies, cells were treated with PI3K inhibitor LY-294002 (50 M) as indicated. G1E-ER4 cells were cultured as described previously (21) and induced with 4-hydroxytamoxifen (4-OHT) (10 Ϫ8 M) whenever required for GATA1 induction.
Isolation and Lentiviral Transduction of Cord Blood-derived CD34 ϩ Cells-To isolate CD34 ϩ cells, cord blood was obtained from the stem cell assay lab (Terry Fox Laboratory, British Columbia Cancer Agency). CD34 ϩ cell-enriched populations (65-98% CD34 ϩ cells) were obtained by positive selection using magnetic beads (Easy Sep Stem Cell Technologies Inc., Vancouver, Canada). Purified CD34 ϩ cells were stimulated overnight for 48 h for in vitro experiments at densities less than or equal to 2 ϫ 10 5 cells/ml in Iscove's medium supplemented with 1% BSA, 10 g/ml bovine pancreatic insulin, and 200 g/ml human transferrin (BIT; Stem Cell Technologies Inc.), 10 Ϫ4 mol 2-mercaptoethanol, 2 mM glutamine, 100 ng/ml FL-3 (Immunex Corp.), 100 ng/ml steel factor, 50 ng/ml thrombo-poietin (Genentech Inc.), and 100 ng/ml hIL-6 as described previously (22). The following day, the cells were pelleted, resuspended in fresh growth factor-supplemented medium with 5 g/ml protamine sulfate and lentivirus with 0.5 ϫ 10 8 to 5 ϫ 10 8 infectious units/ml, placed in a 96-well plate coated with 5 g/cm 2 fibronectin (Sigma), and then incubated at 37°C for 6 h. Lentivirus was produced for pMNDUS vector, pMNDUS-HPIP, pGIPz control shRNA vector, and pGIPz-HPIPshRNA constructs using a standard four-plasmid packaging system by calcium phosphate transfection method in HEK293T cells. Harvested virus-containing supernatants were concentrated by two rounds of ultracentrifugation ϳ1000-fold to achieve titers of 0.5 ϫ 10 9 to 1 ϫ 10 9 infectious units/ml. Viral titers were determined using transduction into HeLa cells followed by FACS analysis.
Generation of HPIP Stable Clones in K562 Cells Using Lentivirus Transduction-For ectopic expression of HPIP in K562, cells were transduced with lentivirus carrying pMNDUS vector or pMNDUS-HPIP with 0.5 ϫ 10 8 infectious units/ml, placed in a 6-well coated plate, and then incubated at 37°C for 24 h. GFP-positive cells were isolated using FACS and used for various assays.
Cell Proliferation Analysis-For analyzing proliferation activity, cells were cultured in growth medium at a starting density of 5000 cells in a 96-well plate, and growth of cells was quantified by MTT assay for indicated time points. Cell counts were performed in quadruplicate every 24 h using a plate reader.
Cell Differentiation by Benzidine Staining Assay-Erythroid differentiation was assayed by the method of Orkin by benzidine staining of hemoglobin accumulated in the cells. Stable clones of HPIP in K562 cell suspension (200 l) was mixed with 20 l of freshly prepared benzidine solution (10:1 mixture of 0.2% 3,3-dimethoxybenzidine in 0.50 M acetic acid and 30% hydrogen peroxide), and stained cells were scored under a microscope. At least 400 cells were examined (in duplicate) at each assay.
Western Blot Analysis-For Western blot analysis, cells were harvested by centrifugation and washed once with phosphatebuffered saline (PBS). Cells were lysed in RIPA buffer (20 mM Tris, pH 7.5, 150 mM NaCl, 1 mM EGTA, 1% Nonidet P-40, 1% deoxycholate, 1 mM phenylmethylsulfonyl fluoride, and complete mini protease inhibitor mixture (Roche Applied Science)) and were centrifuged to remove cell debris. To study phosphor-ylated proteins, cell lysates were prepared in RIPA buffer supplemented with phosphatase inhibitor mixture (Sigma). The protein concentration was determined by the RC-DC protein assay (Bio-Rad), and each lysate containing 50 -70 g of protein was loaded and resolved on an SDS-polyacrylamide gel and transferred to nitrocellulose membrane (Invitrogen) and then probed with specific antibodies. After incubation with HRPconjugated secondary antibodies (GE Healthcare), the blots were visualized with chemiluminescence (ECL) detection reagents (Bio-Rad) followed by autoradiogram using Kodak developing system or by Versadoc imaging system (Bio-Rad). Western blotting was performed using antibodies against HPIP from Bethyl Laboratories; GAPDH, acetyl-H3K4, and CTCF from Millipore; GATA1, phospho-AKT Ser-473, phosphor-GSK3, total AKT, and total GSK3 from Cell Signaling Technologies; C/EBP␣ from Santa Cruz Biotechnology and Alexis; and FLAG from Sigma.
Co-immunoprecipitation Assay-For immunoprecipitation, K562 cells were lysed in lysis buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 3 mM MgCl 2 and 1% Nonidet P-40). Approximately 1 mg/ml cell lysate were precleared with protein A/G beads for 1 h at 4°C. Immunoprecipitation was then done overnight at 4°C using 1 g of antibody/mg of protein. Complexes were collected with protein A/G beads for 1 h at 4°C. After extensive washing with lysis buffer and one time with phosphate-buffered saline (PBS), proteins were detected and separated on 8% SDS-PAGE, and Western blot was performed using specific antibodies.
HPIP Expression in Hematopoietic Tissues and Myeloid Cell
Lines-We have previously documented HPIP expression at the transcript level in K562 and HL60 leukemic cell lines and also in hematopoietic stem progenitor-enriched CD34 ϩ cells (11). Consistent with our previous report, Western blot analysis (Fig. 1A) revealed HPIP expression in various myeloid cell lines, HL60, K562, THP-1, and U-937 cells and in human cord blood CD34 ϩ cells. In addition, real time qPCR analysis showed HPIP expression in mouse hematopoietic tissues such as spleen, bone marrow, and thymus (Fig. 1B). HPIP expression was also detected in purified mouse common myeloid progenitors and common granulocyte and macrophage progenitors (Fig. 1C). In support of these results, gene expression data deposited in the human protein atlas data bank also showed HPIP expression in several hematopoietic cell lines as well as in hematopoietic organs, which include bone marrow, spleen, tonsils, and lymph node (supplemental Fig. S1) (24). These initial findings of HPIP expression in hematopoietic organs and also in hematopoietic cells strongly suggest a possible role for HPIP in hematopoiesis.
HPIP Is a Positive Regulator of Colony-forming Cell (CFC) Activity of CD34 ϩ Cells-To evaluate the effects of HPIP expression on hematopoietic differentiation and lineage commitment, we carried out both HPIP ectopic (over)expression and knockdown studies using human CD34 ϩ cells. For overexpression, we utilized a lentiviral delivery system confirmed to yield readily detectable levels of HPIP expression in HeLa cells by Western analysis using HPIP antibody ( Fig. 2A). To assess the effects of HPIP knockdown in CD34 ϩ cells, we employed an HPIP shRNA lentiviral construct (pGIPz-HPIPshRNA) or control shRNA construct (pGIPz-control shRNA) and confirmed strong suppression (ϳ90%) in transduced HeLa cells (Fig. 2B).
First, we examined whether engineered overexpression/ectopic HPIP modulated the number of clonogenic progenitors using the CFC. Transduced GFP ϩ CD34 ϩ cells were isolated by FACS 48 h post-infection and assayed for clonogenic progenitor content in methylcellulose. HPIP-expressing progenitor cells formed 159 (S.E. 7.0) colonies/500 cells initially plated versus 98 (S.E. 19.7) colonies in the vector control or 104.5 colonies (S.E. 4.9) in untransduced control (Fig. 2C). Interestingly, the lineage distribution of colonies was also significantly altered with more erythroid colonies (BFU-E plus CFU-E) in the HPIPtransduced versus the control-transduced cells (110/500 cells initially plated in HPIP ϩ versus 27.5/500 cells initially plated in the vector control arm or 33.5/500 initially plated in the untransduced control arm) (Fig. 2D). Similarly significant differences in the formation of CFU-GM and CFU-GEMM colonies were also observed (Fig. 2D). Furthermore, the cells derived from these colonies were assessed by FACS analysis for glycophorin A, erythroid-specific marker, and CD33, a myeloid cell marker, expression. HPIP-transduced CD34 ϩ cells showed ϳ48% of glycophorin A-positive cells versus 28% vector control cells (supplemental Fig. S2). Next, the effect of HPIP knockdown on clonogenic capacity of CD34 ϩ cells was evaluated. Consistent with the above results, HPIP knockdown resulted in decreased CFC number of 70.5 (S.E. 2.12) colonies/500 cells initially plated versus 130 (S.E. 0.7) colonies in the vector con-trol or 123 (S.E. 5.6) colonies in untransduced control (Fig. 2, E and F). Together, these results point to an important functional role for HPIP in hematopoietic progenitor function and notably as a positive regulator at the level of erythroid progenitors.
HPIP Expression Alters Erythroid Differentiation Potential of K562 Cells-Because HPIP expression shows a distinct stimulatory effect on BFU-E, we sought to test whether HPIP expression alters erythroid differentiation in K562 cells. K562, a multipotent erythro-megakaryoblastic leukemia cell line, has been used to study in vitro erythroid differentiation as it can be differentiated into mature erythroid or myeloid cells (25). To further examine whether HPIP expression alters erythroid differentiation, we used lentiviral transduc- tion with HPIP or shRNA to HPIP to generate K562 clones in which HPIP was stably overexpressed or knocked down. Stable transformants were sorted using GFP as a tracker by FACS, and HPIP overexpression or knockdown was confirmed by Western analysis (Fig. 3, A and B, respectively). Overexpression of HPIP did not alter K562 cell proliferation significantly as assessed by MTT assay up to day 3 of monitoring (Fig. 3C). HPIP knockdown increased the proliferation capacity of K562 starting from day 2, and this was sus-tained until day 4 (Fig. 3D). Using a benzidine staining assay to measure hemoglobin accumulation as a measure of differentiation, we detected a significant increase in erythroid differentiation in HPIP-overexpressing cells (Fig. 3E). Conversely, HPIP knockdown resulted in a significant decrease, some 2-fold, in DMSO-induced differentiation of K562 cells (Fig. 3F). These results reinforce a model in which HPIP plays an important positive regulatory role in erythroid differentiation. FEBRUARY 17, 2012 • VOLUME 287 • NUMBER 8
HPIP-mediated Erythroid Differentiation Follows PI3K/AKT
Pathway-The PI3K/AKT pathway has been well documented to play an important role in erythroid differentiation (16,18). AKT, which is the downstream target of PI3K, requires PI3Kdependent phosphorylation at Ser-473 for its optimal activity (26). Upon growth factor signaling, AKT phosphorylation of GSK3 at Ser-9 leads to reduced GSK3 activity (27). Based on the fact that HPIP activates the PI3K/AKT pathway (13), we predicted that HPIP-mediated differentiation of K562 cells was linked to the PI3K/AKT pathway. To test this, we examined the activation of downstream targets of PI3K signaling such as AKT and GSK3 in K652-HPIP and K562-HPIPshRNA cells by Western analysis. Ectopic expression of HPIP substantially increased the AKT and GSK3 phosphorylation over control cells indicating the functional activation of the PI3K/AKT pathway in K562 cells similar to MCF7 cells as reported previously (Fig. 4A) (13). Conversely, knockdown of HPIP substantially decreased the phosphorylation of AKT and GSK3 in K562 cells over control cells (Fig. 4B) indicating the requirement of HPIP for the activation of PI3K/AKT pathway in K562 cells.
Chemical and Physiological Modulators of Erythroid Differentiation Induce HPIP Expression in Leukemic Cell
Lines-Because HPIP expression influenced erythroid differentiation in K562 cells, we sought to study its regulation in these cells. Treatment with the differentiation inducer DMSO (1.6%) led to enhanced HPIP expression as revealed by immunoblotting (Fig. 5A). Further real time quantitative RT-PCR analysis confirmed that HPIP RNA levels increased by nearly 5-fold in response to inducer treatment (Fig. 5A, lower panel). Similar results were also obtained in HL60, another leukemic cell line, indicating HPIP induction by DMSO is not restricted to K562 cells (Fig. 5B). However, treatment with sodium butyrate did not change the expression of HPIP significantly in K562 or in HL60 (supplemental Fig. S4).
Next, we tested whether Epo was also capable of inducing HPIP expression. K562 cells were treated with various concentrations of erythropoietin, ranging from 1 to 15 units/ml, and HPIP expression was verified by Western blot analysis. As shown in Fig. 5C, HPIP protein levels increased significantly by Epo at 5 units/ml concentration in parallel to induction of GATA1, which is a known inducer of Epo signaling. This result is further confirmed by real time quantitative PCR analysis (Fig. 5D). Next, we carried out time-dependent treatment with Epo in K562 cells. HPIP expression was induced after 4 h of treatment and then slowly decreased as shown in Fig. 5E. Together, these results suggest that chemical modulators of erythroid differentiation such as DMSO and physiological inducer of erythropoiesis induce HPIP expression in leukemic cell lines.
E/Meg Transcription Factors Activate HPIP Gene Transcription-Because erythropoietin induced HPIP expression in K562 leukemic cells, we sought to elucidate the mechanism of HPIP gene expression in hematopoietic cells. Sequence inspec-tion of the human HPIP 2.3-kb promoter region using TF search tool revealed 16 GATA1-2 consensus binding sites ( Fig. 6A and supplemental Fig. S6). In addition, we also found several other E/Meg-binding sites, which include C/EBP␣ and SCL (Tal-1). Next, to check whether these E/Meg transcription factors indeed activate HPIP gene transcription, we cloned the 2.3-kb 5Ј-flanking regions of the HPIP gene into the promoter probe vector, pGL3, as described under "Experimental Procedures," and performed luciferase assays using co-transfection studies in K562 cells. As shown in Fig. 6B, all known E/Meg transcription factors, which include GATA1-2, C/EBP␣, SCL, PU1, Gif-1, and Fli-1, activated HPIP gene transcription. GATA-1, GATA-2, and C/EBP␣ were notably effective in the activation of HPIP gene transcription by nearly 130-, 6-, and 28-fold over control vector, respectively. These results suggested E/Meg transcription factors activate HPIP gene transcription in K562 cells.
GATA1 Activates HPIP Transcription in G1E-ER4 Cells-Because GATA1 showed the highest transcriptional activity on the HPIP promoter, we further validated the above findings in GIE-ER4 cells. G1E-ER4 cell lines are derived from GATA1deficient ES cells but stably express a conditional form of GATA1 upon exposure to -estradiol or 4-OHT (21). As reported previously, treatment of G1E-ER4 cells with 4-OHT FEBRUARY 17, 2012 • VOLUME 287 • NUMBER 8
JOURNAL OF BIOLOGICAL CHEMISTRY 5607
induced GATA1 accumulation (Fig. 7A). As predicted, HPIP protein levels were increased in parallel to GATA1 protein accumulation upon treatment with 4-hydroxytamoxifen in G1E-ER4 cells. Consistent with Western data, quantitative RT-PCR analysis also showed HPIP mRNA synthesis along with -globin, which is a known target gene for GATA1 (Fig. 7, B and C), demonstrating GATA1-mediated induction of HPIP gene expression in erythroid cells.
GATA1 and C/EBP␣ Are Recruited to HPIP Chromatin and Activate Its Gene Transcription-Because GATA1 and C/EBP␣ showed strong transcriptional activity on the HPIP promoter as shown in luciferase assays, we next examined whether they regulate HPIP gene expression by recruitment to the HPIP chromatin locus. Because the HPIP promoter region contains 16 GATA1-and 4 C/EBP␣-binding sites, we designed the primers in such a way that the amplified region covers at least one binding site for C/EBP␣ and 1-3 binding sites for GATA1 (Fig. 8A). K562 cells were either untreated or treated with DMSO for 24 h or with Epo for 4 h, and then cell lysates were analyzed by ChIP assay followed by qPCR. As shown in Fig. 8B, both GATA1 and C/EBP␣ readily recruited to all four regions of the HPIP promoter (HPIP-PR1, HPIP-PR2, HPIP-PR3, and HPIP-PR4). However, upon treatment with DMSO, GATA1 appears to slightly dissociate from the HPIP promoter region compared with untreated samples, but C/EBP␣ binding is enriched by 3-4-fold at promoter regions 2-4 and completely dissociates from region 1 (HPIP-PR1). Interestingly, GATA1 binding is enriched by 2-fold in region 1 upon Epo treatment but slightly reduced at regions 3 and 4 compared with untreated samples (Fig. 8C). GATA1 occupancy at region 2 is more or less unchanged. In contrast, Epo treatment did not detectably influence C/EBP␣ binding onto HPIP chromatin region 1, but binding at regions 2-4 is decreased (Fig. 8C). These results indicate the differential recruitment of GATA1 and C/EBP␣ onto the HPIP promoter upon treatment with DMSO and Epo. Next, we checked the active chromatin status of the HPIP promoter upon DMSO and Epo treatment. ChIP analysis of K562 cells treated with either DMSO or Epo showed enrichment of his- tone acetylation at lysine 4 (H3K4Ac) (Fig. 8D). Together, these results indicate differential regulation of HPIP gene transcription by GATA1 and C/EBP␣ in response to cell differentiation signals such as DMSO or Epo in K562 cells.
CTCF and GATA1 Coordinatively Regulate HPIP Gene Expression-Bioinformatic analysis of the 5Ј-flanking region of the HPIP gene further revealed the presence of CTCF, a chromatin insulator, -binding sites on HPIP promoter region (supplemental Fig. S7). Because CTCF, a genomic insulator protein, is one of the critical regulators of -globin gene expression and participates in erythroid differentiation (28 -30), we sought to test whether CTCF also regulates HPIP gene transcription. To address this, we treated K562 cells with Epo (5 units/ml) for 4 h, and cell lysates were subjected to ChIP analysis using CTCF antibody. As shown in Fig. 9A, Epo treatment enhanced CTCF binding to the HPIP promoter regions 1 and 2 (HPIP-PR1 and HPIP-PR2) but not regions 3 and 4. Furthermore, CTCF knockdown in K562 cells by CTCF-specific siRNA also affected the HPIP expression in response to Epo treatment (Fig. 9B) indicating CTCF acts as a positive regulator of HPIP gene expression in K562 cells. Next, we checked CTCF requirement for GATA1mediated HPIP gene activation. CTCF siRNA or control siRNA along with a GATA1 expression plasmid and a HPIP promoter-Luc plasmid were transfected into K562 cells, and luciferase activity was measured. As shown in Fig. 9C, CTCF knockdown reduced the transcriptional activity of GATA1 by ϳ50% over control siRNA-transfected cells. Moreover, co-immunopre-cipitation analysis revealed a likely direct interaction of CTCF with GATA1 in K562 (Fig. 9D). Furthermore, to support these results, we carried out co-localization studies using fluorescence microscopy in HeLa cells. Transiently transfected T7-tagged GATA-1 localized to both cytoplasmic and nuclear compartments, whereas CTCF localized only to the nucleus (Fig. 9E). Nuclear co-localization of GATA1 with endogenous CTCF in HeLa cells suggests a coordinated role in HPIP gene regulation.
Because CTCF binding to DNA is methylation-sensitive, we treated K562 cells with the DNA methylation inhibitor, decitabine, to check if HPIP expression is altered. Indeed, treatment with decitabine increased HPIP protein levels as demonstrated by Western analysis (Fig. 9F). Furthermore, luciferase assay data showed increased promoter activity upon decitabine treatment (Fig. 9G). Together these results suggest that CTCF regulates HPIP gene transcription probably by promoting the formation of an active transcription complex with GATA1 in a DNA methylation-sensitive manner.
DISCUSSION
Although HPIP is reported to be expressed in primitive human hematopoietic stem/progenitor cell-enriched CD34 ϩ populations and to act as a co-repressor for pre-B-cell leukemia transcription factor PBX1 (11), its role in hematopoiesis and cell differentiation is not explored. In this study, we attempted to characterize HPIP functions in hematopoiesis in vitro through lentiviral gene transfer into cord blood cells and also its potential role in cell differentiation using K562 cells as a model system. Constitutive expression of HPIP in hematopoietic stem cells significantly increased the frequency of clonogenic progenitors (CFC) (Fig. 2). In particular, HPIP ectopic expression in CD34 ϩ cells increased the number of erythroid colonies. These data are consistent with our cell differentiation assays carried out in K562 as a model system where ectopic expression of HPIP in K562 cells also promoted erythroid differentiation, whereas K562 cell proliferation was unaffected, suggesting that HPIP promotes erythroid cell differentiation. The arrest of cell proliferation observed in K562 cells after HPIP transduction may be the consequence of entry of the cells into a differentia-tion program. Our previous studies demonstrated the inhibitory activity of HPIP on PBX-HOX heterodimers to bind to target sequences and the co-repressor activity on transcriptional activation capacity of E2A-PBX suggesting opposing roles of HPIP on PBX1 functions (11). Accumulating evidence suggests that HOX proteins, which require PBX1 as a cofactor, inhibit erythropoiesis (31)(32)(33). For example, mice transplanted with marrow cells overexpressing HOXA10 are anemic (33). Another HOXA gene, HOXA5, has also been shown to suppress erythroid differentiation when overexpressed in human CD34 ϩ CD38 Ϫ cells (34). Similar results were also seen with overexpression of HOXB6, which suppresses hemoglobinization in both cell lines and primary bone marrow cells (31). In . CTCF regulates HPIP expression in K562 cells. A, K562 cells were either treated or untreated with Epo (5 units/ml) for 4 h, and ChIP assay was performed using anti-CTCF antibody. IgG was used as control. CTCF occupancy over human HPIP promoter regions (HPIP-PR 1-4) was determined by ChIP assay. The bar values indicate the relative values of real time qPCR-amplified products using HPIP promoter-specific primers for the specified regions (HPIP-PR 1-4). B, K562 cells transfected with either control (ctrl) siRNA or CTCF siRNA were lysed, and the cell lysates were subjected to Western analysis using indicated antibodies. C, luciferase assay show the decrease in promoter activity upon CTCF knockdown in GATA1 and hHPIP-promoter luciferase (Luc) co-transfected K562 cells. D, co-immunoprecipitation shows the interaction of CTCF and GATA1 in K562 cells. K562 cell lysates were subjected to immunoprecipitation (IP) with either anti-CTCF antibody or control IgG followed by Western analysis using anti-GATA1 antibody followed by anti-CTCF antibody. E, nuclear co-localization of transiently transfected T7-tagged GATA1 with endogenous CTCF in HeLa cells. F, effect of DNA demethylation agent, decitabine (Dcbn), on HPIP expression. K562 cells were treated with DNA demethylation agent, decitabine, at indicated concentrations, and cell lysates were subjected to Western analysis using specified antibodies. GAPDH serves as internal loading control. G, effect of decitabine at various concentrations on HPIP promoter-Luc activity determined by luciferase assay. The data presented are representative of one of two independent experiments. FEBRUARY 17, 2012 • VOLUME 287 • NUMBER 8 addition, recipients of HOXB6-transduced marrow are anemic and have lower numbers of splenic BFU-E. Conversely, HOXB6-deficient mice have higher numbers of adult and fetal BFU-E (32) suggesting that HOX genes oppose erythropoiesis. Given the repressive effect of HPIP on PBX-HOX activity and its stimulatory effect on BFU-E, a key role for HPIP appears to be its ability to oppose HOX-PBX1-mediated inhibition of erythroid differentiation.
Role of HPIP in Erythroid Differentiation
Our data also implicate the PI3K/AKT pathway in HPIPmediated erythroid differentiation. The PI3K pathway is well connected to erythropoiesis. For example, activation of phosphatidylinositol 3-kinase is important for erythropoietin-induced erythropoiesis from CD34 ϩ hematopoietic progenitor cells (35). AKT1, which is one of the main downstream targets of PI3K, is reported to mediate erythropoiesis in response to Epo signaling by controlling GATA1 transcriptional activity on the TIMP-1 gene (19). Particularly, DMSO-induced erythroid differentiation is dependent on PI3K activity indicating the central role for the PI3K/AKT pathway in erythropoiesis (20). As shown in this study, LY294002, a selective inhibitor of PI3K, caused decreased numbers of CFC colonies in HPIP-overexpressing CD34 ϩ cells and blocked differentiation of K562 cells. In agreement with our previous studies in MCF7, HPIP expression in K562 also activated PI3K/AKT signaling as demonstrated by Western analysis (Fig. 4). The mechanism of PI3K/ AKT activation by HPIP may be based on the interaction of HPIP with PI3K as demonstrated previously in MCF7 breast cancer cell lines (13). However, it remains possible that HPIP could interact directly or indirectly with the EPO receptor through PI3K, which we have yet to analyze.
As HPIP expression is important for erythroid differentiation, we hypothesized that HPIP gene expression may be influenced by erythroid differentiation signals. Accordingly, in this report we also present evidence that HPIP gene expression is induced by Epo as well as by DMSO, a chemical inducer of erythroid differentiation, in K562 cells. Upon DMSO treatment for 5 days in K562 cells, we observed ϳ4-fold induction of HPIP mRNA (Fig. 5A), which correlates with increased HPIP protein levels. Similarly, DMSO also induces HPIP expression in HL60 (Fig. 5B). It is well documented that erythropoietin regulates the expression of many genes that are involved in erythropoiesis. For instance, c-MYB and NUM genes are induced in response to Epo treatment at day 8 in CD34 ϩ cells (36). Indeed GATA1 itself is induced in response to Epo signaling (37). Similarly, tumor suppressor transcription factor p73 is also induced in response to Epo signaling and is involved in erythroid differentiation (38). It is likely that GATA1 activates various genes, including HPIP, whose activity is required for cellular differentiation program.
GATA transcription factors, particularly GATA-1, GATA-2, and GATA-3, are important in regulating gene expression during hematopoiesis and in determining hematopoietic cell lineages. For instance, GATA-1 is an essential transcription factor for the development of erythroid cells (39,40). GATA-2 is critical in the development of hematopoietic stem cells (41). The HPIP promoter contains 16 predicted GATA-binding sites, and reporter assays revealed a strong stimulatory effect of GATA1 on HPIP gene transcription, ϳ130-fold activation. In G1E-ER4 cells, endogenous HPIP gene expression is induced upon GATA1 expression (Fig. 7). However, we also observed basal levels of HPIP expression in GATA1 knock-out cells (data not shown). This is similar to basal -globin expression in GATA1 knock-out cells (42). Other than GATA1, there may be other transcription factors that could activate HPIP expression in GATA1 knock-out cells. Indeed, our luciferase data show that other known E/Meg signature transcription factors, such as GATA2, C/EBP␣, PU1, SCL, Gif-1 and Fli-1, could also activate the HPIP gene promoter suggesting HPIP expression is not only dependent on GATA factors. Furthermore, our ChIP analysis also shows the direct recruitment of GATA1 onto the HPIP promoter, and upon Epo signaling GATA1 binding to the HPIP promoter was enhanced (Fig. 8B) consistent with a direct role for GATA1 on HPIP gene expression. This is consistent with the result from genome-wide analysis of GATA1-binding sites in G1E-ER4 cells carried out by Hardison and co-workers (43). As shown in supplemental Fig. S8, their data reveal that GATA1 and GATA2 occupy the HPIP promoter region. Because DMSO and Epo induce HPIP expression in K562 cells, enrichment of acetyl histone H3 lysine 4 upon treatment with either agent also reflects the active chromatin status of the HPIP promoter during erythroid differentiation (Fig. 8D). GATA1 regulates a number of genes involved in erythropoiesis, including its cofactor Fog-1 (42). Because GATA1 regulates HPIP gene expression, which in turn regulates the PI3K pathway for erythroid differentiation, this suggests a "feedforward" mechanism in HPIP-mediated erythroid differentiation.
Erythroid lineage commitment from progenitor cells requires the precisely coordinated activation of erythroid genes. It is dependent on the coordinated regulation of various erythroid transcription factors (44). In accordance with this, during erythroid differentiation HPIP transcription may be dependent on various transcription factors. In this context, we have also investigated the role of CTCF on HPIP gene expression for two reasons. One, CTCF-binding sites are mapped to the HPIP gene locus (supplemental Fig. S8) (45). Second, CTCF is reported to regulate erythroid differentiation (29,30). Our experimental results show that CTCF is required for Epo-induced HPIP gene expression (Fig. 9B). Furthermore, ChIP analysis shows the binding of CTCF to the HPIP promoter. Intriguingly, we also found that GATA1-mediated activation of the HPIP promoter requires CTCF (Fig. 9C) perhaps due to direct protein-protein interaction as evidenced by our co-immunoprecipitation results (Fig. 9D). CTCF binding to DNA is dependent on the methylation status of its target DNA (46). Intriguingly our studies show that decitabine, a DNA methylation inhibitor, treatment enhanced HPIP gene expression, consistent with increased recruitment of CTCF. CTCF-dependent HPIP gene expression, as shown here, partly explains how CTCF regulates erythroid differentiation in addition to its direct effect on -globin synthesis.
In conclusion, we propose a model wherein erythroid differentiation factor, Epo signaling, regulates HPIP expression through coordination between erythroid lineage-specific transcription factor GATA1 and CTCF. HPIP thus expressed in turn mediates erythroid differentiation through the activation of the PI3K/AKT signaling pathway. Epo influences HPIP gene expression through the GATA1 transcription factor, and HPIP in turn activates GATA1 through the PI3K pathway indicating a feedforward mechanism operating in erythroid differentiation. | 8,484 | 2011-12-20T00:00:00.000 | [
"Biology"
] |
The Role of DDB2 in Regulating Cell Survival and Apoptosis Following DNA Damage - A Mini-Review
Nucleotide excision repair (NER) represents a central cellular process for the removal of structurally and chemically diverse DNA lesions [Friedberg et al., 2006]. Mutations in genes involved in NER are associated with rare autosomal recessive syndromes such as xeroderma pigmentosum (XP), a condition characterized by sensitivity to UV light, neurological abnormalities, and a propensity to develop skin cancer (Cleaver, 2005). The observation that cells from XP subgroup E (XP-E cells XP2RO and XP3RO) are defective in recognizing damaged DNA and performing NER highlighted the physiological importance of the protein termed DNA damage-binding protein, or DDB [Chu & Chang, 1988]. The DDB protein, sometimes also referred to as UV-DDB due to its high affinity and specificity for UV-damaged DNA, contains two principal subunits, DDB1 and DDB2 [Grossman, 1976; Keeney et al., 1993; Takao et al., 1993]. The DDB protein complex also binds to non-UVdamaged DNA, like cisplatin-modified DNA, although with much lower affinity. Although the history of DDB spans more than two decades, the complete understanding of its physiological functions remains to be clarified. The activity of DDB has been repeatedly described in crude mammalian cell extracts by electrophoretic mobility shift assays or filterbinding assays performed by different laboratories since the first report of its discovery [Feldberg & Grossman, 1976]. Notably, micro-injections of DDB complexes into the nucleus of XP-E cells restored NER activity [Keeney et al., 1994], supporting the notion that DDB participates in chromatin NER. The DDB1 gene from simian cells was the first DDB gene to be identified [Takao et al., 1993]. The human DDB1 and DDB2 genes were subsequently sequenced [Dualan et al., 1995; Lee et al., 1995]. Soon after, DNA sequencing from Linn’s laboratory revealed that DDB2 is mutated in XP-E cells which lack DDB activity [Nichols et al., 1996; Tang & Chu, 2002]. The predicted DDB2 protein sequence was shown to contain several functional domains, including WD40 repeats, post-translation modification sites (e.g. acetylation, phosphorylation, and ubiquitination), DDB1and DNA-binding sites, as well as a DWD box. Notably, in a majority of XP-E cell lines, DDB2 was found to be altered at domains other than the one required for binding DNA. Thus, DDB appears to be regulated at several levels in UV-irradiated cells, including by transcriptional activation of DDB2 mRNA, post-translational modification, translocation to the nucleus, complex formation,
Fig. 1.Overall structure of the DDB1-DDB2-DNA complex.Ribbon representation of the DDB-DNA 6-4PP complex: DDB2; DDB1-BPA; DDB1-BPB; DDB1-BPC; DDB1-CTD.The DNA 6- 4PP damaged and undamaged DNA strands are depicted in black and gray, respectively.DNA binding is carried out exclusively by the DDB2 subunit via its WD40 domain.The DDB1 structure consists of three WD40 β-propeller domains (BPA, BPB, and BPC) and a Cterminal helical domain (CTD, shown at the center).DDB2 binds to an interface between the DDB1 propellers BPA and BPC, where its helix-loop-helix motif inserts into a cavity formed by the two propellers.The structures reveal the molecular mechanism underlying highaffinity recognition of UV lesions (damaged DNA strand) that are refractory to detection by XPC.The structures also suggest a mechanism for the assembly of the DDB-CUL4 ubiquitin ligase in chromatin and provide a framework for understanding the ubiquitination of proteins proximal to damage sites.[For detail, see Scrima et al., 2008].three founding cullins that are conserved from yeast to humans.A large number of E3 ubiquitin-protein ligase complexes are part of the DCX proteins (short for DDB1-CUL4-Xbox).Components of the CUL4-DDB-ROC1 (also known as CUL4-DDB-RBX1) include CUL4A or CUL4B, DDB1, DDB2, and RBX1 (Chen et al., 2001;Groisman et al., 2003).Other CUL4-DDB-ROC1 complexes may also exist in which DDB2 is replaced by a subunit that targets an alternative substrate.These targeting subunits are generally known as DCAF proteins (short for DDB1-and CUL4-associated factor) or CDW (short for CUL4-DDB1associated WD40-repeat; for reviews, see Lee & Zhou, 2007;Jackson & Xiong, 2009;Sugasawa, 2009).Many CUL4 complexes are involved in chromatin regulation and are frequently hijacked by viruses (reviewed by Jackson & Xiong, 2009).The DDB1-CUL4-ROC1 complex may ubiquitinate histones H2A, H3, and H4 at sites of UV-induced DNA damage (Wang et al., 2006;Kapetanaki et al., 2006;Guerrero-Santoro et al., 2008).The ubiquitination of histones may facilitate their removal from the nucleosome and promote assembly of NER components for subsequent DNA repair.Furthermore, the DDB1-CUL4-ROC1 complex ubiquitinates XPC and DDB2, which may enhance DNA binding by XPC and promote NER (El-Mahdy et al., 2006;Sugasawa et al., 2005).Structural analysis support the notion that CUL4 uses DDB1 as a large β-propeller protein and as a linker to interact with a subset of WD40 proteins like DDB2, which serves as substrate receptors, forming as many as 90 E3 complexes in mammals [Jackson & Xiong, 2009].Taken together, these results indicate that DDB complex is a component of the CUL4A-based ubiquitin ligase DDB1-CUL4A DDB2 , and that DDB2 may coordinate the ubiquitination of various proteins at DNA damage sites during GG-NER.In addition, CUL4B also binds to UV-damaged chromatin as a part of the DDB1-CUL4B DDB2 E3 ligase in the presence of functional DDB2.Nevertheless, CUL4B is localized in the nucleus and facilitates the transfer of DDB1 into the nucleus independently of DDB2 [Guerrero-Santoro et al., 2008].Notably, DDB1-CUL4B DDB2 is more efficient than DDB1-CUL4A DDB2 in mono-ubiquitinating histone H2A in vitro, suggesting that the DDB1-CUL4B DDB2 E3 ligase may have a distinctive function in modifying the chromatin structure at sites of UV lesions and promoting efficient GG-NER.Intriguingly, the CSA protein, a WD40 motif protein defective in a complementation group of Cockayne's syndrome, forms a similar E3 complex in place of DDB2 at damage sites during TC-NER.Although not detected in the DDB2 and CSA complex, CUL4B is highly expressed in mammalian cells, and the two CUL4 isoforms CUL4A and CUL4B appear to be redundant, at least for some cellular functions [Higa et al., 2003;Hu et al., 2004].
DDB2 inhibits apoptosis in cultured cell lines and Drosophila
Although the regulation of the DDB2 gene is complex, evidence on the biological function of DDB2 in response to apoptotic stimuli has accumulated.Evidence from biochemical experiments has shown how DDB2 interacts with proteins, DNAs, and RNAs.Most strikingly, structural studies using X-ray crystallography support the evidence of biochemical studies, as seen for example with GG-NER.Nevertheless, a complete understanding of the biological roles of DDB2 remains to be fully elucidated.To assess this question, we explored the role of DDB2 in regulating UV sensitivity in both human cells and Drosophila [Sun et al., 2010].As such, a full-length DDB2 open reading frame sequence was overexpressed in cells that express low or no DDB2.Conversely, DDB2 expression was suppressed in cells that endogenously express high levels of DDB2 by stable expression of full-length anti-sense cDNA.Using this strategy, we found that DDB2 displays a protective role against UV irradiation and cell surface death receptor signaling in both cisplatin-selected human HeLa cells and hamster V79 cells [Sun et al., 2002a;Sun et al., 2002b;Sun & Chao, 2005a].Furthermore, cFLIP expression was upregulated by DDB2 in a dose-and time-dependent manner in HeLa cells, a process associated with inhibition of apoptosis [Sun & Chao, 2005a].Inhibition of cFLIP by anti-sense oligonucleotides substantially inhibited apoptosis induced by UV irradiation and death receptor signaling in HeLa and other cell lines.Importantly, the protective effect of DDB2 was only detected in cells in which cFLIP is elicited during apoptotic stimuli.In contrast, DDB2 did not show a protective effect against apoptotic stimuli in human cell lines in which cFLIP expression was not induced [Sun et al., 2010].A transcription reporter assay also showed that DDB2 induces the transcription of cFLIP in a p38/MAPK-dependent manner [Sun & Chao, 2005b], suggesting that the DDB2/cFLIP pathway may be active in specific cell conditions [Figure 2].Surprisingly, overexpression of a DDB2 mutant (82TO) that does not significantly enhance DDB activity (Nichols et al., 1996), also protected HeLa cells from both UV-and Fas-induced cell death (Sun et al., 2002a;Sun & Chao, 2005a), suggesting that the protection effect of DDB2 may be independent of its DNA repair activity.Furthermore, ectopic expression of human DDB2 in Drosophila dramatically reduced UV-induced animal death compared to control GFP expression.On the other hand, expression of DDB2 in Drosophila failed to rescue a different type of apoptosis induced by the genes reaper or eiger [Sun et al., 2010].Depletion of DDB2 in HeLa cells did not affect apoptosis induced by cisplatin or mitomycin C (Sun et al., 2002a).In addition, overexpression or inhibition of DDB2 in HeLa cells only slightly affected cisplatin-induced caspase-8 signaling and apoptosis (Sun & Chao, 2005a), probably due to the observation that cisplatin primarily induces mitochondrial apoptotic signaling (Gonzalez et al., 2001).These observations suggest that the modulation of apoptosis by DDB2 may be unique.Cross-resistance to UV was found in cisplatin-selected cells, which overexpress DDB2 [Chu & Chang, 1990;Chao et al., 1991].DDB2 is a transcriptional partner of E2F1; however, the target of DDBs/E2F1 has not been identified (Hayes et al., 1998;Shiyanov et al., 1999).We found that the overexpression of DDB2 increases the expression of cFLIP at both the mRNA and protein levels in resistant cells in which DDB2 has been genetically suppressed [Sun and Chao, 2005a].E2F1 was also shown to regulate the expression of cFLIP (Stanelle et al., 2002).Therefore, cFLIP may represent the first potential target of DDB2/E2F1.E2F1 promotes TNF-induced apoptosis by stabilizing the TRAF2 protein (Phillips et al., 1999).However, the possibility that DDB2/E2F1 may co-activate cFLIP expression suggests a possible dual role for E2F1 in regulating cell survival and death.Additional overexpression of E2F1 does not increase endogenous cFLIP expression more than overexpression of DDB2 alone (Peng, 2008).Thus, the increased level of E2F1 observed in resistant cells is not enough to support the apoptotic resistance mediated by DDB2-cFLIP.Although induction of cFLIP by DDB2 is required to protect cells against UV-induced apoptosis, at least in HeLa cells, we could not exclude the possibility that other genes are also involved in mediating the anti-apoptotic effect of DDB2.
Ectopic expression of DDB2 induces apoptosis in DDB2-deficient cells
An extensive review of XP-E and DDB has been presented by Itoh who focused on XP-E and DDB2 as well as the classification of photosensitive diseases [Itoh, 2006].Surprisingly, XP-E cell strains proved to be abnormally resistant to UV irradiation and possessed reduced caspase-3 activity.Since the apoptotic defect in XP-E strains could be rescued by exogenous p53 expression, DDB2 was also proposed to regulate p53-mediated apoptotic pathway after UV irradiation in human primary cell strains [Itoh et al, 2000;2003].Cells from DDB2knockout mice also showed abnormal resistance and impaired p53 response to UV irradiation similar to human XP-E cell strains [Itoh et al., 2004].Furthermore, a recent study has demonstrated that mouse embryonic fibroblasts and human HeLa that express DDB2 shRNA are resistant to apoptosis induced by a variety of DNA-damaging agents despite the activation of p53 and other pro-apoptotic genes [Stoyanova et al., 2009].Also, these DDB2deficient cells are resistant to E2F1-induced apoptosis, probably due to the observation that these cells undergo p21Waf1/Cip1-associated cell cycle arrest following DNA damage.Notably, DDB2 targets p21Waf1/Cip1 for proteolysis and this process involves Mdm2 in a manner that is distinct from the p53-regulatory activity of Mdm2 [Stoyanova et al., 2009].These results suggest a new regulatory loop involving DDB2, Mdm2, and p21Waf1/Cip1 that is critical in determining the cellular fate between apoptosis and cell cycle arrest (for DNA repair) in response to DNA damage.The existence of this regulatory loop may be strengthened by showing that forced expression of DDB2 renders XP-E or DDB2-deficient cells sensitive to apoptotic stimuli.
Cancer-prone DDB2-deficient mice
DDB2-knockout mice have been shown to be prone to cancer formation [Itoh et al., 2004].Importantly, mice with single DDB2 allele knockout showed enhanced skin cancer following UV-B exposure, suggesting that DDB2 heterozygotes may be predisposed to skin cancer [Itoh et al., 2004].In addition, XP mouse models were reported to be prone to the formation of papillomas induced by 7,12-dimethylbenz[a]anthracene (DMBA) [de Bohr et al., 1999;Nakane et al., 1995;de Vries et al., 1995], a carcinogen that produces bulky DNA adducts usually repaired by the NER system.On the other hand, p53-knockout mice are prone to spontaneous tumors [Donehower et al., 1992;Jacks et al., 1994], but not to tumors induced by DMBA or 12-O-tetradecanoyl-phorbol-13-acetate (TPA) [Kemp et al.,1993].Taken together, these observations suggest that DDB2 may be involved in cancer formation through p53-mediated pathways.However, it is unclear whether re-introducing DDB2 in DDB2-knockout mice may prevent cancer formation.
Concluding remarks and future perspectives
The various results cited above suggest that the genetic integrity or gene expression status of the cells may be critical in determining the regulatory effects of DDB2 in response to apoptotic stimuli.The level of DDB2, p53, E2F1, and other proteins such as anti-apoptotic cFLIP and cell-cycle arrest p21, for instance, should be considered.The pro-apoptotic activity of p53 could vary between primary and cultured cell lines.For example, p53 activity in HeLa cells is hijacked by the human papillomavirus (HPV) E6 protein, a process that weakens apoptotic signaling in these cells.High levels of DDB2 may up-regulate and potentiate p53 activity by up-regulating apoptotic proteins in p53-normal cells.As such, HeLa cells, which harbor nearly null-p53 activity and additional anti-apoptotic cFLIP activity elicited by DDB2, may become resistant to apoptosis in response to cytotoxic DNA damage.These cellular responses are not surprising if the cultured cell lines were transformed by viruses or chemical means.Unfortunately, the cell lines used for the studies mentioned above are often treated this way.Furthermore, the expression of DDB2 isoforms, including the inhibitory D1 isoform, is often overlooked and the differential expression of such isoforms may dictate the cellular responses observed.Accordingly, alternative splicing of DDB2 transcripts and alteration of these genetic factors by other means in cell lines must be considered while evaluating the role of DDB2 in regulating apoptosis.In fact, there is no evidence so far that the apoptotic resistance of DDB2-defective XP-E, DDB2-knockout mouse cells, or DDB2-deficient human cells could be rescued by re-introducing DDB2 expression.In this sense, DDB2 is required to suppress apoptosis, but it does not suffice to be apoptotic.Furthermore, DDB2 as a proteasome component can target various proteins, such as p21 which is involved in cell cycle arrest, subsequently dysregulating cell cycle arrest during stress repair and leading to apoptosis.The cisplatin-selected HeLa cells used in our study do not display G1 arrest following mild, repairable DNA damage [Lin- Chao & Chao, 1994], which may explain the negligible, pro-apoptotic influence of DDB2 found by others [Stoyanova et al., 2009].Therefore, an updated model is proposed in Figure 3, in Fig. 3. Updated model for the regulation of DNA damage-induced apoptosis by DDB2.In this model, DNA damage applied to cells was mild and reached repairable level, leading to inhibition of apoptosis and cell cycle arrest for stress repair.The regulatory effect of DDB2 can be pro-apoptotic in cells experiencing mild DNA damage through p21 degradation which is targeted by DDB2.On the other hand, DDB2 can also be anti-apoptotic in cells harboring non-DNA damage apoptotic stimuli (e.g., death receptor) with up-regulation of anti-apoptotic cFLIP.Accordingly, the final outcome may be influenced by intrinsic mutations or extrinsic viral hijacking that can impair checkpoint for G1 arrest via p53 and p21.
which the regulatory effect of DDB2 can be either pro-apoptotic in cells that respond to mild DNA damage or anti-apoptotic in cells that respond to non-DNA damage apoptotic stimuli and that show up-regulation of the anti-apototic cFLIP.Notably, we found that human DDB2 may play a protective role against UV irradiation in the fruit fly Drosophila which does not express DDB2 as seen in the DDB2-defective cultured cell models.Therefore, the seemingly contrasting results mentioned above may be explained by our models, and primary cell cultures which are more representative of in vivo situations may represent a better choice for future studies of the biological functions of DDB2.
Fig. 2 .
Fig. 2. Model illustrating the role of DDB2 in regulating non-DNA damage-induced apoptosis.An anti-apoptotic effect is proposed for DDB2 against death ligand-or UVinduced stress through cFLIP up-regulation.DDB2 transactivation of cFLIP is required to enhance their apoptosis-inhibitory function.UV-or death receptor-induced apoptosis is attenuated by the up-regulated cFLIP; consequently, activation of initiator caspases (3 and 7), cleavage of protein substrates (PARP and DFF), and apoptosis are inhibited.DDB2 may also attenuate UV-induced apoptosis through repair of DNA damage.However, evidence from protective DDB2 mutants suggests possible alternative pathways.DL, death ligands; DR, death receptors.[Modified from Sunand Chao, 2005a] | 3,752 | 2011-11-07T00:00:00.000 | [
"Biology"
] |
Measurement of Transverse Spin Dependent Azimuthal Correlations of Charged Pion(s) in $p^{\uparrow} p$ Collisions at $\sqrt s = 200$ GeV at STAR
At the leading twist, the transversity distribution function, $h^{q}_{1}(x)$, where $x$ is the longitudinal momentum fraction of the proton carried by quark $q$, encodes the transverse spin structure of the nucleon. Extraction of it is difficult because of its chiral-odd nature. In transversely polarized proton-proton collisions ($p^\uparrow p$), $h_{1}^{q}(x)$ can be coupled with another chiral-odd partner, a spin-dependent fragmentation function (FF). The resulting asymmetries in hadron(s) azimuthal correlations directly probe $h_{1}^{q}(x)$. We report the measurement of correlation asymmetries for charged pion(s) in $p^\uparrow p$, through the Collins and the Interference FF channel.
Introduction
At the leading twist, the nucleon structure is fully described by three Parton Distribution Functions (PDFs): the unpolarized PDF, f 1 (x), the helicity PDF, g 1 (x), and the transversity PDF, h q 1 (x), where x is the nucleon momentum fraction carried by partons. Although, f 1 (x) and g 1 (x) are reasonably well constrained by experimental data [1,2], the knowledge of h q 1 (x) is limited to the semi inclusive deep inelastic scattering (SIDIS) and e + e − data [3]. This is because h q 1 (x) is a chiral-odd object and it needs to be coupled with another chiral-odd partner to form a chiral-even cross section that is experimentally observable.
In polarized proton-proton collisions (p ↑ p), h q 1 (x) can be coupled with chiral-odd spin-dependent fragmentation functions (FFs). Selecting inclusive charged hadrons within jets, collimated sprays of particles produced by fragmentation and hadronization of partons in high energy collisions, involves the Collins FF, whereas selecting oppositely charged di-hadron pairs in the final state involves the interference FF (IFF). In both channels, the coupling of h q 1 (x) with the respective FF results in experimentally measurable azimuthal correlation asymmetry, A U T , which is sensitive to h q 1 (x).
Experiment and Dataset
The Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) is capable of colliding bunched beams of polarized protons up to a center-of-mass energy ( s) of 510 GeV. The Solenoidal Tracker At RHIC (STAR) is one of the major experiments, where the Time Projection Chamber (TPC) is the main detector that provides particle tracking and identification in the midpseudorapidity region (−1 < η < 1) and over the whole 2π range in azimuthal angle [4]. The time-of-flight detector (TOF) [5], with a similar coverage as the TPC, improves the STAR's PID capability. The barrel electromagnetic calorimeter (BEMC) provides event triggering based on the energy deposited in its towers.
STAR firstly observed the IFF asymmetry based on 2006 p ↑ p data at s = 200 GeV [6], followed by the 2011 data at s = 500 GeV [7], and the Collins asymmetry based on 2011 data at s = 500 GeV [8].
Results
The Collins and the IFF asymmetries for charged pion(s) are extracted using the cross-ratio formula [9], where, N ↑(↓) is the number of π ± within jets (Collins channel) or exclusive π + π − pairs (IFF channel) when the beam polarization is ↑ (↓), in the respective detector halves, α and β. P is the average beam polarization. The azimuthal angle definitions and asymmetry extraction approach for the IFF and the Collins channels are based on the STAR publications [6] and [8], respectively. The mechanism of producing azimuthal correlations and its extraction from a theoretical point of view can be found in [10].
High-quality tracks are selected by applying several quality cuts and charged pions are identified by measuring their ionization energy loss, 〈d E/d x〉. For both channels, pions are selected by requiring a cut on the number of standard deviations of measured 〈d E/d x〉 from the expected pion energy loss, −1 < nσ π < 2. Furthermore, we find that the TOF enhances the particle identification (PID) in the momentum region where the TPC d E/d x between particle species overlaps. The Collins analysis utilizes both TPC and TOF information for PID in those regions, whereas the IFF analysis only makes use of the TPC. For both analyses, the average π ± purity reaches ∼90% in different kinematic regions. However, IFF analysis uses π + π − pairs, whose combined purity is ∼ 80%.
To estimate the trigger bias on the measurements, PYTHIA 6 [11] events are run through the STAR detector simulation implemented in GEANT 3 [12] and embedded into zero-bias events. The magnitude of the bias is determined by calculating the fraction of quark events at the detector level (GEANT) and at the particle level (PYTHIA) and taking a ratio between them. The effect of particle impurity and the trigger bias correction are the two main sources of systematic uncertainties.
as a function of η π + π − , integrated over M π + π − inv and p π + π − T (upper panel). The average x, fractional proton momentum carried by a quark, and z, fractional quark energy carried by the π + π − pair, are estimated from GEANT simulation in the corresponding η π + π − bins and shown in the bottom panel.
increases linearly with η π + π − in the forward region. The small asymmetry signal in the backward η π + π − region is mainly due to scattering from a quark at lower x, which is typically associated with the unpolarized beam. A strong correlation between the observed asymmetry and x can be seen, where x ranges from ∼ 0.1 to 0.22 from backward to forward η π + π − . However, z shows no clear dependence, the average of which is ∼ 0.46. The 2015 IFF results corroborate previous 2006 [6] and 2011 [7] results.
Preliminary results for the Collins asymmetry, A , as a function of particle jet p T are shown in figure 2. A significant positive asymmetry for π + and negative asymmetry for π − is observed in the follows a similar charge dependence in the x f < 0 region as well (lower panel). This charge-dependence is consistent with a theoretical calculation [15] and the Collins asymmetry in SIDIS [16]. Although the theoretical calculation undershoots data, they both follow a similar trend. This result shows a large asymmetry signal with higher statistical precision than previous STAR Collins analysis [8]. The Collins analysis is also performed for the identified kaon (K) and proton (p). It is found that the Collins asymmetry for K + is about the size of π + within the statistical uncertainties, while K − and p(p) asymmetries are consistent to zero.
Conclusion
STAR has measured charged pion(s) correlation asymmetries through the IFF channel based on 2015 and the Collins channel based on 2012+2015 p ↑ p data at s = 200 GeV. These datasets cover the Q 2 at the order of ∼ 100 GeV 2 at intermediate x, which is well within the valance quark region. The measured IFF asymmetry signal is enhanced around M π + π − inv ∼ 0.8 GeV/c 2 , which is consistent with the theoretical calculation and the previous STAR measurements. A large asymmetry in the forward η π + π − region corresponds to higher x, where quark transversity is expected to be sizeable, whereas the backward asymmetries are small since the probed low-x quarks are mainly from the unpolarized proton. The large Collins asymmetry, as a function of particle jet p T , is larger than the theory prediction in the x f > 0 region, but exhibits a similar trend, whereas the asymmetry in the x f < 0 region is small. The charge-dependence of π + (π − ) asymmetry is consistent with the Collins asymmetry found in SIDIS. The statistical precision of these results is largely improved with respect to previous STAR results. The systematic uncertainty includes the effect from the PID and trigger bias, which is well understood in the Collins analysis. However, the large systematic uncertainty in the IFF analysis is dominated by the PID effect, which will be reduced in the near future. These high percision IFF and Collins asymmetriey measurements will help to constrain the valance-quark transversity distributions and test the universality of the mechanism producing such asymmetries in different collision processes: SIDIS, e + e − , and p ↑ p. | 1,957.2 | 2021-07-31T00:00:00.000 | [
"Physics"
] |
Artificial Neural Networks in Coordinated Control of Multiple Hovercrafts with Unmodeled Terms
: In this paper, the problem of coordinated control of multiple hovercrafts is addressed. For a single hovercraft, by using the backstepping technique, a nonlinear controller is proposed, where Radial Basis Function Neural Networks (RBFNNs) are adopted to approximate unmodeled terms. Despite the application of RBFNNs, integral terms are introduced, improving the robustness of controller. As a result, global uniformly ultimate boundedness is achieved. Regarding the communication topology, two different directed graphs are chosen under the assumption that there are no delays when they communicate with each other. In order to testify the performance of the proposed strategy, simulation results are presented, showing that vehicles can move forward in a specific formation pattern and RBFNNs are able to approximate unmodeled terms.
Introduction
In recent years, Wireless Sensor Networks (WSNs) have attracted growing interests from researchers, because they have merits, compared with traditional networking solutions, such as reliability, flexibility, and an ease of deployment, that enable their use in a wide range of varied application scenarios [1].They can be applied to track moving objects, to monitor special areas so as to trigger alarm systems when some dangerous signals are detected, etc.As the eyes and ears of the IoT, WSNs can work as bridges to build connections between the real-world and the digital-world.In light of this promising application scenario, this paper mainly focuses on a case study of mobile WSNs, where a group of hovercrafts equipped with specific sensors are chosen as test platforms.The objective is to enable them to move around and interact with the physical environment [2] and thus execute a mission of mapping, searching, and monitoring in a specific area.
Coordinated control of a fleet of hovercrafts is challenging, especially when we take into account their complex dynamic models.Until now, for a single surface vehicle, many research results have been reported.For example, a linear fuzzy-PID controller was proposed in [3].Compared with the ordinary PID controller, the proposed controller therein performs better in term of improving settling time and reducing overshoot of the control signal.However, their works just consider the kinematic models of the vehicle without considering the dynamic models, which is not realistic in real operation scenarios.Another weakness of the linear controller is that it usually achieves local stability, e.g., [4], where velocity and position controllers were developed based on a linearized system, which is controllable only when the angular velocity is nonzero.Considering the limitations of linear controllers, in [5,6], nonlinear controllers for underactuated ships were designed, and global asymptotic stability is achieved.In [7], a nonlinear Lyapunov-based tracking controller was presented, and it was able to exponentially stabilize the position tracking error to a neighborhood of the origin that can be made arbitrarily small.A method of incorporating multiobjective controller selection into a closed-loop control system was presented in [8], where the authors designed three controllers so as to capture three "behaviors" representative of typical maneuvers that would be performed in a port environment.However, none of the works mentioned above consider disturbances and unmodeled terms of the vehicle.In order to ensure that the vehicle is robust to external disturbances, in [9], two controllers with application to a surface vehicle (named Qboat) and a hovercraft were proposed, and the authors designed disturbance estimators to estimate external constant disturbances.The disadvantage of this control strategy is that they did not consider unmodeled terms involved in the dynamic model of the vehicle.Considering this constraint, an estimator was developed in [10], where a fuzzy system was used to approximate unknown kinetics.A fault tolerant tracking controller was designed in [11] for a surface vessel.In addition, a self-constructing adaptive robust fuzzy neural control scheme for tracking surface vessels was proposed in [12], where simulation results were shown to testify the efficiency of the proposed method therein.
With respect to coordinated control strategy for multiple vehicles, many authors have presented their own approaches.In [13], a cooperative path following methodology was proposed under the assumption that the communication among a group of fully-actuated surface vehicles is undirected and continuous.A coordinated path following with a switching communication topology was designed in [14], while a null-space-based behavioral control technique was proposed in [15,16].In [17][18][19], a leader-follower control strategy was presented.In [20], an adaptive coordinate tracking control problem for a fleet of nonholonomic chained systems was discussed under the assumption that the desired trajectory is available only to part of the neighbors.The reader is also referred to [21] for more results about multi-vehicle control approaches.
Inspired and motived by those works mentioned above, in this paper, we first develop a controller that is able to drive a single hovercraft to the neighborhood of a desired smooth path, where a Radial Basis Function Neural Network (RBFNN) is applied to approximate unmodeled dynamic terms of the vehicle while integral error terms are introduced, thus improving the robust performance of the controller.It is relevant to point out that all elements of the estimation weight matrix are always bounded through the use of a smooth projection function.We also derive a consensus strategy to make sure the desire paths progress in a specific formation.In order to validate the effectiveness of the proposed strategy, simulation results are presented.
The rest of the paper is organized as follows: Section 2 presents robot modeling, graph theory, RBFNNs, and coordinated control problem.A single controller is proposed in Section 3, while Section 4 devises a consensus strategy.Simulation results are given in Section 5 to validate the performance of the proposed approach herein.At last, Section 6 summarizes our work and describes the future work.
Vehicle Modeling
We first define a global coordinate frame {U} and a body frame {B} as shown in Figure 1.The kinematic equations of the vehicle are written as where ṗ = [x, y] T denotes the coordinates of its center of mass, v = [u, v] T represents linear velocity, ψ is the orientation of the vehicle, and its angular velocity is represented by ω.Moreover, the rotation matrix R(ψ) is given by where S(ω) is a skew symmetric matrix, given by ) T , m and J denote the car's mass and rotational inertial, respectively.The force used to make the car move forward is denoted by T, and τ represents the torque that can steer the vehicle.Unmodeled dynamic terms are represented by ∆ v and ∆ ω .For more details about modeling surface vehicles, the reader is referred to [22].
Graph Theory
In this paper, G = G(V, E ) denotes a directed graph that can be used to model the interaction communication topology among mobile robots.The graph G consists of a finite set V = {1, 2, ..., n} of n vehicles and a finite set E of m pairs of vertices V ij = {i,j} ∈ E .If V ij belongs to E , then i and j are said to be adjacent.A graph from i to j is a sequence of distinct vertices starting with i and ending with j such that consecutive vertices are adjacent.In this case, V ij also represents a directional communication link from agent i to agent j.The adjacency matrix of the graph G is denoted by A = [a ij ] ∈ R n×n , which is a square matrix where a ij equals to one if {j, i} ∈ E and zero otherwise.Moreover, the Laplacian matrix L is defined as L = D − A, where the degree matrix D = [d ij ] ∈ R n×n of the graph G is a diagonal matrix and d ij equals the number of adjacent vertices of vertex i.
Radial Basis Function Neural Networks
Radial Basis Function Neural Networks (RBFNNs) can be used to approximate the unmodeled nonlinear dynamic terms due to their universal approximation capability [23].For any unknown smooth function f (x) : R n → R m can be approximated by RBFNNs in the following form, given by f (x) = W T σ(x) (7) where x ∈ Ω ⊂ R n , Ω is a compact set.The adjustable weight matrix with n neurons is denoted by W ∈ R n×m under the assumption that it is a bounded matrix, that is It is important to point out that here when we say matrix x ∈ R m×n is smaller than or equal to x max ∈ R m×n , we mean all elements of x are smaller than or equal to their corresponding elements of x max ∈ R m×n .Moreover, σ(x) is the basis function vector and . ., n denotes its component, µ i is the center of the receptive field, and c i represents the width of the Gaussian function.Moreover, it is relevant to point out that, in order to achieve better approximate results, we should make the neuron number n large enough and choose the parameters properly.Going back to the smooth function f (x) mentioned above, there is an ideal weight W d such that where (x) denotes the approximation error and satisfies || (x)|| ≤ max , where max is a positive number.It is noted that W d is an "artificial" quantity for the purpose of mathematic analysis, in the process of controller design, we need to estimate it [24].A simple RBFNNs is given by Figure 2.
Problem Statement
Now we can state our problem: Through designing a controller for each robot and proposing consensus strategies for their corresponding desired paths, we want a group of mobile robots to move forward in a specific formation pattern; that is, (1) for an individual vehicle, ||p − p d || → δ, where δ is an arbitrarily small constant value; (2) for a group of n desired paths, γ i − γ j → 0 and γi − γd → 0, where agent j is the neighbor of agent i, γd represents the desired value of γi , which is a known value.
Controller Design
Following the works of [7] and [9], we define the position error in the body frame as where R = R(ψ) denotes the rotation matrix.The time derivative of e 1 yields ė1 = −S(ω)e 1 + v − R T ṗd .(11) Define our first Lyapunov function as and compute its time derivative, we have where α 1 = k 1 e T 1 e 1 is positive definite, and k 1 denotes gain, which is a positive value.In order to continue to use the backstepping technique, a second error term is defined as and its integral term where η = [η 1 , η 2 ] T , η 1 = 0 is a constant vector.Define our second Lyapunov function as and its time derivative yields where α 2 = k 1 e T 1 e 1 + k 2 e T e 2 , which is positive definite, and k 2 is a positive number.It is relevant to point out that e 2n is introduced to eliminate the external slow-varying disturbances that act on the dynamic of the linear velocity v.Moreover, notice that we do not know ∆ v , thereby we use RBFNNs mentioned before to approximate it, given by where where Rewrite Equation ( 17), and we have where Now, we can define our third Lyapunov function as where W d1 = W d − W d1 denotes the estimation error, Γ 1 = diag(λ 11 , andλ 12 ) is a matrix, where λ 11 and λ 12 are positive values.Compute the time derivative of V 3 , and one obtains Therefore, we choose our desired input I d as and thereby our first controller T is chosen as Correspondingly, the desired angular velocity is where Notice that, if the updated law for ˙ W d1 is set as we cannot ensure that it is bounded by W max .To solve this problem, a projection operator, which is Lipschitz continuous [25], is applied in our case, which is given by where with the following condition: if ˙ˆ = proj(ρ, ˆ ) and ˆ (t 0 ) ≤ max , then Therefore, to make sure all elements of Ŵd1 are upper-bounded, the update law for ˙ W d1 is finally set as To keep using the backstepping technique, we define a new error term and its corresponding integral term Then, we define a new Lyapunov function as Compute its time derivatives, substitute Equation (29) into Equation ( 24), and combine the 4th property of projector ( ˜ proj(ρ, ˆ ) ≥ ˜ ρ), and one obtains where However, it is noted that both β and σ(x 1 ) contain unmodeled term ∆ v , so we need to separate ∆ v out from β and σ(x 1 ).After that, we can use RBFNNs to estimate it.Similar to ∆ ω , we also need to approximate it.This is given by, where Now we define our last Lyapunov function as where ), and Γ 3 = diag(λ 31 , λ 32 ) are positive definite gain matrices.Then, we compute the time derivative of V 5 as where and T .Then, we define our second control law, torque, as and estimate laws for ˙ W d2 and ˙ W d3 Substitute Equation ( 40)-(42) into Equation (37), one obtains where
Stability Analysis
Theorem 1.For a single mobile robot, by applying control laws, Equations ( 24) and (40), and updated laws, Equation (29), Equation (41), and Equation (42), for any large initial position, the robot will converge to the neighborhood of its corresponding desired path p d (γ), whose partial derivatives with respect to γ are all bounded.As a consequence, global uniformly ultimately boundedness is achieved.
Proof.Let's go back to Equation (43).Rewrite it, and we obtain where ] T , which is bounded due to the fact that || i (x)|| ≤ max , and the upper bound of ρ is Thereby, we can obtain that V5 is negative for ||X|| ≥ ||ρ max /k min ||, which can be made as small as possible by tuning the value of k min .As a result, the system is uniformly ultimate bounded, global uniformly ultimate boundedness is achieved.
Consensus Strategy
Building upon the work of [14], the proposed solution is given by γi where a 1 , a 2 , anda 3 are positive numbers, and v d denotes the desired value of γi .It is relevant to point out that z i can be viewed as an auxiliary state that helps n paths to reach consensus.
Proof.We first choose Laplacian matrix L, and define the coordinate error as where Λ = [γ i ] n×1 .Rewrite Equations ( 46) and (47), and one obtains where ) is a positive definite matrix.Define x = [Γ e , Z] T , and rewrite Equations ( 49) and ( 50), and we have where In order to ensure that Equation ( 51) is stable, we need to guarantee that all the eigenvalues of A have negative or zero real parts and all the Jordan blocks corresponding to eigenvalues with zero real parts are 1 × 1.We consider n agents, where For the sake of saving space, here we just present the eigenvalues of A directly, they are λ 1 , λ 2 , λ 3 , and λ 4 , and their corresponding multiplicities are 1, 1, n − 1, and n − 1, with Choosing a 1 , a 2 , anda 3 properly, we can guarantee that λ 2 , λ 3 , and λ 4 are negative-definite.Moreover, we also have that the Jordan block corresponding to λ 1 is 1 × 1.As a consequence, Equation (51) is stable [26].
To summarize, a fleet of n desired paths can progress in a specific formation, while each individual mobile robot converges to the neighborhood of its corresponding desired path.As a result, all those robots can move forward in a specific formation.
Simulation Results
In this section, we present simulation results with two different communication graphs including a cascade-directed communication graph (CDCG) and a parallel-directed communication graph (PDCG).Figures 3 and 4 show the sketches of the control blocks in Simulink/Matlab.
The Cascade-Directed Communication Graph
The CDCG used in this study is shown in Figure 5, where agent 1 can be viewed as the leader.Its corresponding Laplacian matrix L 1 is The desired paths are defined as where R i = 6 − i, (i = 1, 2, . . .,5) denotes the radius of the circle, and unmodeled terms ∆ V and ∆ ω are denoted by [0.1u 2 + 0.01uv, 0.01uv + 0.1v 2 ] T and 0.01uv + 0.05uω + 0.01vω, respectively.The parameters used herein are as follows: m = 0.6, J = 0.1, Figure 6a shows the actual trajectory of the mobile robots, and we can see that they move forward in a line formation.Moreover, Figures 7a,b display the convergence of ||e 1i || and ||e 2i ||, respectively, showing that all of them converge to the neighborhood of zero.The consensus are shown in Figures 8a,b, showing that γ ij = γ i − γ j converges to zero, where agent j is the neighbor of agent i.Moreover, we can also see that γi converges to the desired value γd = 0.5.It is important to point out that, in this work, we chosen agent 1 as our leader, and this satisfies γ1 = γd .The approximate performance of RBFNNs can be found in Figure 6b, where the blue lines denote the real values of ∆ v and ∆ ω , while the red lines represent their estimates ∆v and ∆ω .Thus, both estimates converge to their corresponding real values.
Parallel Communication Graph
The parallel communication graph is depicted in Figure 9.In this case, the Laplacian matrix L 2 is Moreover, it is noted that the graph presented in Figure 9 can be viewed as the combination of two cascade-directed graphs, where 1 → 2 → 4 and 1 → 3 → 5 and where agent 1 is the leader whose state is known.The desired paths are as follows: (1) 1st vehicle: (2) 2nd vehicle: (3) 3rd vehicle: (4) 4th vehicle: (5) 5th vehicle: where 10a displays the actual paths of the robots.
From Figure 11a,b, we can obtain that the norm of position errors and linear velocity errors converge to a ball centered at the origin.Moreover, Figure 12a,b show the performance of the consensus strategy introduced herein.It is interesting to remark that, by using the proposed consensus strategy-Equations ( 46) and (47)-a consensus is researched if and only if the graph a spanning tree [21].However, in our case, the root must be the leader whose states are known beforehand.
Conclusions
In this paper, we mainly focus on designing coordinated control algorithms for multiple agents, where a group of underactuated hovercrafts were chosen as test platforms.In order to testify the efficiency of the devised control strategy, we implemented it by using Simulink/Matlab.Moreover, it is necessary to point out that agents can also be mobile robots, unmanned air vehicles, etc.For a single vehicle, we used RBFNNs to approximate unmodeled terms and introduce integral terms, which can improve the robustness of the controller.For multiple vehicles, we consider directed topology under the assumption that the communication among vehicles are continuous.
With respect to our future works, we plan to (i) use deep neural networks to estimate unmodeled terms so as to enhance the performance of approximation, (ii) build a mathematical model for external disturbance, such as winds, waves, or currents, (iii) take into account time-delays when we develop communication strategy for vehicles, and (iv) propose collision-avoidance algorithms so that we can ensure the operation is safe.
Figure 1 .
Figure 1.Simple Model of The Vehicle.
Figure 4 .
Figure 4. Control block in Simulink/Matlab for the i-th vehicle.
Figure 6 .
Figure 6.Norm of the position errors and the performance of unmodeled term approximation (CDCG).(a) Norm of the position errors.(b) Performance of unmodeled term approximation.
Figure 7 .
Figure 7. Norm of the position and linear velocity errors (CDCG).(a) Norm of the position errors.(b) Norm of the linear velocity errors.
Figure 10 .
Figure 10.Norm of the position errors and performance of unmodeled term approximation (PDCG).(a) Norm of the position errors.(b) Performance of unmodeled term approximation.
Figure 11 .
Figure 11.Norm of the position and linear velocity errors (PDCG).(a) Norm of the position errors.(b) Norm of the linear velocity errors. | 4,727.8 | 2018-05-24T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Nanoscale light element identification using machine learning aided STEM-EDS
Light element identification is necessary in materials research to obtain detailed insight into various material properties. However, reported techniques, such as scanning transmission electron microscopy (STEM)-energy dispersive X-ray spectroscopy (EDS) have inadequate detection limits, which impairs identification. In this study, we achieved light element identification with nanoscale spatial resolution in a multi-component metal alloy through unsupervised machine learning algorithms of singular value decomposition (SVD) and independent component analysis (ICA). Improvement of the signal-to-noise ratio (SNR) in the STEM-EDS spectrum images was achieved by combining SVD and ICA, leading to the identification of a nanoscale N-depleted region that was not observed in as-measured STEM-EDS. Additionally, the formation of the nanoscale N-depleted region was validated using STEM–electron energy loss spectroscopy and multicomponent diffusional transformation simulation. The enhancement of SNR in STEM-EDS spectrum images by machine learning algorithms can provide an efficient, economical chemical analysis method to identify light elements at the nanoscale.
In a multi-component material, light elements determine the physical, chemical, mechanical, and electrical properties of the material; hence, alloying with light elements can be exploited for many applications. For example, the microstructure and phase stability in ferrous alloys are strongly dependent on the addition of a small amount of C and/or N (~ 1 wt%), which in turn dramatically changes their mechanical properties and corrosion resistance [1][2][3][4][5] . In addition, the distribution/concentration of light elements at the nanometer scale substantially affects the phase formation, which determines the performance of the material 6,7 . Therefore, analytical characterization techniques, strengthened by both a robust detection limit and nanometer spatial resolution, are required for researching and manufacturing materials with enhanced properties.
Analytical techniques such as scanning transmission electron microscopy (STEM)-electron energy loss spectroscopy (EELS) and 3D atom-probe tomography (3D-APT) have been widely used to characterize the chemical composition or a phase structure of materials due to their excellent detection limits (0.005-0.1 at% 8-10 and 0.001 at% 11,12 , respectively) and spatial resolutions (0.1 nm 13,14 and 0.2-0.4 nm [15][16][17][18] , respectively). In spite of these strengths, these techniques have some drawbacks. For example, large background EELS signals that stem from multiple scattering appear in the tails of the zero-loss peaks, resulting in a reduction in sensitivity 19 . Consequently, chemical composition results are substantially affected by the thickness of samples when using STEM-EELS. In addition, the wider usage of 3D-APT in nanoscale characterization is limited owing to the necessity of using small analytical volumes (~ 10 × 10 × 100 nm) 20,21 , the difficulty of sample preparation, and the production of local magnification artifacts caused by evaporation field-induced compositional variations 16,22 .
In contrast, STEM-energy dispersive X-ray spectroscopy (EDS) allows a detection limit as small as 0.05 wt% 23 with nanometer spatial resolution (< 2 nm) 24,25 and adequate efficiency of both time and cost for chemical quantification. However, the detection limits of light elements are insufficient, since less characteristic X-ray signals are generated by light elements owing to the lower number of orbiting electrons. This results in a smaller sample signal compared to the noise signal. This lower signal-to-noise ratio (SNR) restricts light element identification by Figure 1 shows SEM images of the microstructures of HNS specimens aged at 900 °C for 10 3 , 10 4 , and 10 5 s. Trigonal Cr 2 N precipitates [61][62][63] were observed in the micrographs as bright white regions at the grain boundaries and within the grain with a lamellar structure. In the specimen aged for 10 3 s (Fig. 1a), a cellular type of Cr 2 N began to form within the grains, and the volume fraction of cellular Cr 2 N increased with the aging time (Fig. 1b,c). The precipitate embryos grow by consuming other embryos or constituent elements around them, resulting in the formation of a region depleted of specific elements around the precipitate. However, the depletion zone of light elements such as N is not easily detected by conventional STEM-EDS technology because the SNR is too low. We attempted to overcome this detection limit by reducing the noise signals using unsupervised machine learning algorithms. First, the elemental distribution around the Cr 2 N precipitate was investigated using STEM-EDS. Then, the noise signals in the spectral images were reduced by combining several unsupervised machine learning algorithms. Finally, by comparing the noise-reduced STEM-EDS, EELS, and simulation results, the depletion zone of light elements was confirmed. Figure 2 shows the high-angle annular dark-field imaging (HAADF)-STEM and EDS mapping images of a typical precipitate in the HNS sample aged at 900 °C for 10 3 s. The precipitates had a cellular morphology and width of 100-150 nm. The EDS maps show that the main components of the precipitate and matrix were Cr and Fe, respectively (see Fig. 2b,c, respectively). There was minimal Fe in the precipitate, while Mn, N, and Mo were all present (Fig. 2d-f). The concentration of Fe atoms in the precipitate was less than 5% of that in the matrix (for details, see Fig. 3a), which suggests that the precipitate does not overlap with the matrix alloy, or is placed on a very thin matrix layer that can be considered negligible. Thus, the characteristic X-ray signals of Mn and Mo within the precipitate, as shown in Fig. 2e,f, respectively, do not result from the matrix alloy but from the precipitate itself. The concentration of Mn atoms within the precipitate was smaller than that within the matrix, while the concentration of Mo atoms was greater. The morphology and width of the precipitates and the respective distribution of each element (including each element's concentration) were similar to that in the samples aged for 10 4 and 10 5 s (see Supplementary Figs. S1 and S2 online).
For more quantitative analysis, the EDS concentrations of each element were profiled along the red lines in Fig. 3a 30 . Within the precipitate, the Cr and N concentrations were approximately 71 and 4 wt%, respectively, regardless of the aging time. However, the Cr and N concentrations around the precipitate, i.e., in the depletion region, were dependent on the aging time. In the sample aged for 10 3 s, a Cr-depleted zone was observed around the precipitate, with a minimum Cr concentration of 13 wt% (adjacent to the precipitate; see the left inset of the composition line profile in Fig. 3a for details), while no such reduction in Cr concentration was observed for the samples aged for 10 4 or 10 5 s (see the left insets in Fig. 3b,c, respectively). This difference is likely to result from the diffusion of Cr atoms from the matrix to the region around the precipitates in the samples aged for 10 4 and 10 5 s. Nevertheless, this does not explain the absence of an N-depleted region in the sample aged for 10 3 s (right inset in Fig. 3a), because the interstitial diffusion of N atoms is faster than that of the other substitutional elements. Considering the low concentration of N in HNS, it is conceivable that the flat concentration profile resulted from difficulties in distinguishing N signals and noise.
To investigate the presence of the N-depleted region around the Cr 2 N precipitate, the noise signals of the SIs were reduced using the SVD and ICA algorithms. The EDS maps were reconstructed using only a few principal components following decomposition using the SVD and ICA [64][65][66] and selected based on a knee-point detecting algorithm 67 (for details, see Supplementary Figs. S3-S5 online). Figure 4 shows the reconstructed EDS maps of the samples aged for 10 3 , 10 4 , and 10 5 s. The Cr, Fe, and Mn elemental maps do not differ much from the original maps. This suggests that the principal components selected based on the knee-point algorithm provide enough information to represent most of the variation of the characteristic X-ray signals, while also reproducing the elemental configuration. However, where the element had a relatively small concentration, such as N and Mo, the SNR of the elemental maps was considerably enhanced by the noise reduction process (for details, see Supplementary Fig. S6 online). The magnitude of characteristic X-ray signals from the majority elements (Cr, Fe, and Mn) is sufficiently higher than that of the noise signals; therefore, reduction of the noise signals has a negligible effect on the original spectral data. The opposite was observed for N and Mo, where the original SNR was much lower.
To confirm whether the remarkable SNR enhancement in the N and Mo spectral data would reveal the presence of the N-depleted region, we re-examined the compositional line profiles of the precipitate and surrounding area. In order to make a fair comparison with the line profiles in Fig. 3, the concentrations of each element were profiled at exactly the same position and width, as shown in The resulting compositional line profiles of all elements except N were equivalent to those from the original spectral images. However, an N-depleted region was clearly revealed in the sample aged for 10 3 s following noise reduction, as shown in the right inset in Fig. 5a. The width of the Cr-and N-depleted regions were almost identical at 70-100 nm, indicating that the diffusions of Cr and N atoms were considerably correlated. Additionally, the minimum Cr concentration in the depletion region was 13 wt% (adjacent to the precipitate), which coincides with the result obtained from the line profile in Fig. 3a, while the minimum N concentration was 0.01 wt% (adjacent to the precipitate). Compared to the Cr and N It is almost impossible to quantify the detection limit in EDS images because of the discontinuity of the characteristic X-ray signals; therefore, we evaluated the degree of enhancement for the detection limit of EDS images by calculating the SNR ( Table 1). The EDS mapping images of abundant elements like Cr, Fe, and Mn www.nature.com/scientificreports/ were slightly or negligibly enhanced, but those of sparse elements like Mo and N were drastically improved on the SNR (or the detection limit), with improvements of 470% and 44%, respectively. We performed EELS analysis of a Cr 2 N precipitate to validate the EDS results. The EELS elemental maps of Cr and N (Fig. 6a,b, respectively) provide information about the Cr-and N-depleted regions, with these regions being clearly recognized in the line profiles (see Fig. 6c and Supplementary Fig. S11 online). To compare the Cr and N EELS line profiles, the intensity of the N profile was adjusted to that of the Cr profile by multiplication with an appropriate value. Interestingly, the depletion regions of Cr and N coincided precisely, as shown in Fig. 6c. Both regions had a width of approximately 70-100 nm, which is the same as the width of the depletion regions obtained from the EDS results (Fig. 5a). This confirms that the noise reduction achieved by the proposed technique successfully increases the SNR without loss of information from the original signals. In general, the efficiency of EELS analysis in terms of both time and cost is inferior to that of EDS analysis. In addition, to eliminate the effect of sample thickness on the EELS signals, the plural scattering signals must be removed from raw EELS data, with the risk of distorting the spectra. From this perspective, EDS analysis with machine learning algorithms is more effective than EELS for detecting light elements.
To better understand the reason behind the similar widths of the N-and Cr-depleted regions, we explored the diffusional dynamics of each element, i.e., Fe, Cr, Mo, Mn, and N, using numerical simulations to solve the diffusion equation. The simulation results are summarized in Fig. 7. The austenite and Cr 2 N phases are both thermodynamically stable at 900 °C. Hence, the direction in which the interface moves is related to the equilibrium fractions of Cr 2 N and austenite. Figure 7b-f show the concentration profile changes for each element in the whole system at different times. The element concentrations change abruptly at the interface of the two phases. Fe, Cr, and N have relatively large concentration gradients at the submicron scale when the heat treatment is less than 10 3 s. This means that the probability of observing Cr-and N-depleted regions in the sample aged for 10 3 s is higher than that in the samples aged for longer than 10 3 s. Additionally, the simulation results coincided with the EDS and EELS experiments. It is important to note that, over time, the gradient of N is similar to that Table 1. Signal-to-noise ratios (SNRs) calculated for energy dispersive X-ray spectroscopy (EDS) elemental mapping images before and after noise reduction (NR). www.nature.com/scientificreports/ of Cr (Fig. 7c,f, respectively), which could be attributed to the chemical potential effect caused by the Cr concentration gradient in the matrix. This happens despite the diffusion coefficient of N being approximately five orders of magnitude higher than that of other substitutional elements. These diffusional dynamics induced by the chemical potential effect force the width of the N-depleted region to correspond with the Cr-depleted region. The compositional profile of alloying elements near the precipitate is essential for understanding the evolution of the precipitate. However, it is difficult to measure the profile of light elements such as N. Machine learning algorithms, such as SVD and ICA, can successfully reveal not only Cr deficiency but also N deficiency, which is regarded as the primary reason for degradation of various mechanical and corrosive properties, around the Cr 2 N precipitates that form in HNS. The physico-chemical properties of steel alloys depend on the distribution of precipitates. Therefore, an advanced analysis of the distribution of precipitates is important for the design of high-performance steels. The precise detection and analysis technique suggested in this study can be utilized in a comprehensive interpretation of the evolution kinetics of nanometer-sized precipitates containing light elements, and consequently can result in the design of an optimum thermal treatment process.
Conclusions
The combination of two unsupervised machine learning algorithms, i.e., SVD and ICA, successfully reduces the noise signals in EDS images and therefore increases the SNR of images. The N-depleted region around the Cr 2 N precipitate, which was concealed by noise signals in the original EDS data, was revealed using this technique. This is significant owing to the difficulties of noise separation and removal through normal signal processing methods. The Cr-and N-depleted regions were only observed in samples aged for 10 3 s when using our proposed method. The widths of the Cr-and N-depleted regions were equal, ranging from 70 to 100 nm. This consistency was validated using EELS. Simulations provided further evidence for the diffusional dynamics that explain how N, with lighter and faster diffusion, follows the depletion behavior of Cr. Both the simulation and EELS results support our method as a feasible and useful way of increasing the SNR in spectral images of different natures, including EDS and EELS. The work reported in this study can be viewed as a potential way of identifying light elements, such as N, from EDS experiments, in a more efficient way than that of EELS experiments. Other popular decomposition methods, such as non-negative matrix factorization (NMF), also provide the same results suggested in this work (see Supplementary Fig. S12 online). Thus, it is valuable to explore and compare different multivariate analysis algorithms for identifying light elements, which we will explore in future work.
Methods
Sample preparation. The HNS was a commercial P900NMo alloy (manufactured by VSG, Essen, Germany) with a composition of Fe bal. -17.94Cr-18.60Mn-2.09Mo-0.89N-0.04C (in wt%), which is a modified version of P900 (DIN 1.3816) with higher Mo and N concentrations. Specimens (12 × 10 × 4 mm) were cropped from the hot-rolled plate, encapsulated in an evacuated quartz tube, solution-treated at 1,150 °C for 30 min, and water-quenched. The resulting specimens were isothermally aged at 900 °C for 10 3 , 10 4 , and 10 5 s under Ar, followed by water-quenching. At this aging temperature, Cr 2 N formation is facilitated while the formation of other www.nature.com/scientificreports/ precipitates is retarded (e.g., σ phase) 50,61,68,69 . After isothermal aging, the microstructure for each specimen was analyzed using SEM (JSM-7100F, JEOL, Japan). For this analysis, the aged specimens were mechanically ground with SiC abrasive papers to 2,400 grit, mechanically polished using a diamond suspension with a particle size of 1 μm, and chemically etched in a glyceregia reagent (10 mL nitric acid, 20 mL hydrochloric acid, and 30 mL glycerin) at 25 ± 1 °C for 1-2 min followed by rinsing with water and drying in air.
Electron microscopy analysis.
To investigate the elemental configuration changes and aging time of the depletion region using STEM-EDS, samples with different aging times were prepared using a focused ion beam (FIB; Helios NanoLab 600, FEI, US) lift-off milling technique. The Cr 2 N precipitates were observed using TEM (Talos F200X, FEI, US) at an accelerating voltage of 200 kV (Schottky X-FEG gun) and equipped with a Super-X EDS system comprising four windowless silicon drift detectors (SDDs) in STEM mode with a probe current of ~ 0.7 nA. To guarantee a high enough SNR, the EDS mapping data was collected through a spectrum imaging form for 60 min with a 20 ms/pixel dwell time. This large dwell time also allows the Bremsstrahlung background subtraction based on a simple and widely used two-window method. The windows for each element (Cr, Fe, Mn, N, and Mo) are denoted in Supplementary Fig. S13 (online). After the background removal, we quantified the composition of each element in the HNS samples using this spectrum imaging data based on the conventional Cliff-Lorimer method with k-factors provided by the manufacturer (Bruker). EELS signals were obtained using a Quantum 966 (Gatan, USA) spectrometer attached to a Cs-corrected microscope (Titan 80-300, FEI, Netherlands), with an energy resolution of 0.8 eV for 0.01 eV/channel energy dispersion. The convergence semi-angle for the incident beam was 36 mrad, with an EELS collection semi-angle of ~ 50 mrad.
Noise reduction using machine learning algorithms. To reduce the noise signals in the STEM-EDS images, principal component analysis (PCA) and ICA, which are machine learning algorithms for dimensional reduction, were performed using the HyperSpy package 70 , written in Python. The noise-reduced EDS mapping images were obtained by the following three steps: (1) decomposition of the multivariate X-ray signals using the SVD algorithm; (2) independent component analysis; and (3) reconstruction of de-noised EDS maps. For the PCA, the spectral energy information of each pixel in the spectral images obtained by STEM-EDS was decomposed using the SVD technique. Spectral image with spatial dimensions of 1,024 × 1,024 and an energy dimension of 4,096 was decomposed by computing the SVD as follows: where M is a 1024 2 × 4,096 spectral image matrix, U is a 1024 2 × 1024 2 factor matrix vector, Σ is a diagonal 1024 2 × 4,096 eigenvalue matrix with non-negative values, and V T is the conjugate transpose with a 4,096 × 4,096 loading matrix vector. In terms of matrix factorization, the factor and loading matrix can be expressed as follows: Then, in view of eigenvalue decomposition, U and V, which are eigenvectors of MM T and M T M, respectively, can be calculated by solving the eigenvalue characteristic equations: where λ represents the eigenvalues and x the eigenvectors, which can be transformed to the U and V matrices. Consequently, the principal components were derived with ΣV T . Since the noise signal is subject to the Poisson distribution due to the uncertainty of electrons, the Poissonian noise normalization method was adapted into all of the decomposing processes. Then, the ICA, known as blind source separation [64][65][66] , was performed using the FastICA algorithm 71 embedded in the HyperSpy package to enhance the physical correlation between the principal components. As the factor matrix was derived from the SVD calculations, FastICA was used to find a maximum of the w T U non-Gaussianity, where w is the weight vector. To do this, an initial weight vector was randomly selected, and the vector matrix was recalculated until it converged, as shown in the following equation: where E{x} is the variance of the x matrix and g(x) is the derivative of the non-quadratic function. Finally, the independent components were obtained by multiplying w and U. The independent components with high eigenvalues that represent most variances were used for the reconstruction of the de-noised EDS mapping images. www.nature.com/scientificreports/ This was conducted using a PCA scree plot and a knee-point detecting approach 67 (for details, see the PCA scree plots, signals, and maps of the independent components in Supplementary Figs. S3-S5 online).
To evaluate the SNR of the spectral images, the coefficient reciprocal of the variation calculation method 72 was adopted. Briefly, for each element constituting the Cr 2 N precipitate, appropriate ranges of energy in the spectral images were summed. Then, given images containing intensities of the elemental signals, the SNR was calculated as follows: where µ is the expected value of the intensities of signals in the image, and σ is the standard deviation of the noise. This method has been widely used for SNR quantification in the field of image and signal processing 73-75 . Multicomponent diffusional transformation simulation. Cr 2 N precipitate growth was simulated using multicomponent diffusional transformation (DICTRA module, Version 2018a, Thermo-Calc. Software AB, Sweden) 76 software using thermodynamics (TCFE7.0) and mobility (MobFe2) databases [77][78][79] . This software obtains a numerical solution of the diffusion equation at the local equilibrium in the phase interface. Assuming there is no difference in the chemical potential at the interface between the matrix (austenite) and precipitate (Cr 2 N), the alloying element concentration at the interface can be evaluated from the thermodynamic equilibrium. The rate of phase transformation was controlled by the rate of the incoming or outgoing diffusional flux of elements. The software can simulate the growth process of the Cr 2 N precipitate in austenite assuming diffusioncontrolled growth by solving equations of thermodynamic phase equilibrium, flux balance, and diffusion. The conservation of mass leads to the following flux balance conditions at the moving interface between the austenite matrix and Cr 2 N precipitate: where V is the interface migration rate, C austenite k and C Cr 2 N k are the concentration of species k in austenite and Cr 2 N close to the interface, respectively, and J austenite k and J Cr 2 N k are the diffusion flux in austenite and Cr 2 N, respectively. These can be expressed according to Fick's first law of diffusion 77 : where n is the number of elements, D n kj is the diffusion coefficient of the matrix, and ∇C j is the concentration gradient for element j.
The growth of the Cr 2 N precipitate was simulated using the moving boundary model of the DICTRA software. It was assumed that the austenite and Cr 2 N phases are separated by a planar boundary, and that thermodynamic equilibrium exists locally at the interface. Initial conditions were set where 1 nm of Cr 2 N is bound by a 2 μm layer of austenite. The initial Cr 2 N composition was assumed to be the same as the thermodynamic equilibrium results at 900 °C. The austenite composition was set as Fe bal. -18Cr-18Mn-2Mo-0.9 N (wt%). The concentration was calculated for 20 uniform points within Cr 2 N and 200 uniform points within austenite. The transition of the interface and the concentration profiles at the interface were calculated for the sample aged at 900 °C for 10 4 s.
Data availability
The datasets generated during and/or analysed during the current study are not publicly available due to preparing another study and patent but are available from the corresponding author on reasonable request. | 5,518.8 | 2020-08-13T00:00:00.000 | [
"Physics"
] |
As(III, V) Uptake from Nanostructured Iron Oxides and Oxyhydroxides: The Complex Interplay between Sorbent Surface Chemistry and Arsenic Equilibria
Iron oxides/oxyhydroxides, namely maghemite, iron oxide-silica composite, akaganeite, and ferrihydrite, are studied for AsV and AsIII removal from water in the pH range 2–8. All sorbents were characterized for their structural, morphological, textural, and surface charge properties. The same experimental conditions for the batch tests permitted a direct comparison among the sorbents, particularly between the oxyhydroxides, known to be among the most promising As-removers but hardly compared in the literature. The tests revealed akaganeite to perform better in the whole pH range for AsV (max 89 mg g−1 at pH0 3) but to be also efficient toward AsIII (max 91 mg g−1 at pH0 3–8), for which the best sorbent was ferrihydrite (max 144 mg g−1 at pH0 8). Moreover, the study of the sorbents’ surface chemistry under contact with arsenic and arsenic-free solutions allowed the understanding of its role in the arsenic uptake through electrophoretic light scattering and pH measurements. Indeed, the sorbent’s ability to modify the starting pH was a crucial step in determining the removal of performances. The AsV initial concentration, contact time, ionic strength, and presence of competitors were also studied for akaganeite, the most promising remover, at pH0 3 and 8 to deepen the uptake mechanism.
Introduction
Arsenic pollution in surface and groundwater is a worldwide issue due to its natural abundance by dissolution from soils or anthropogenic activities [1]. The arsenic toxicity depends on its chemical nature, as inorganic arsenic compounds are more dangerous than organic ones. Moreover, factors, such as pH, redox potential, competitors, and microorganisms may affect the speciation, mobility, and bioavailability of arsenic [2]. In water, depending on pH, arsenic and arsenious acid and their deprotonated forms are present. In particular, H 3 AsO 4 appears up to pH 3, H 2 AsO 4 − is present in the pH range 1-8, HAsO 4 2− in the range 5-13, and, beyond pH 13, only AsO 4 3− exists. Concerning As III species, the neutral H 3 AsO 3 is the only species that goes up to pH 7, which is when H 2 AsO 3 − starts to generate, and then HAsO 3 2− begins at pH 10 and AsO 3 3− at pH 12 [3]. Generally, natural water is in the range of pH 4-8, while coal and acid drainage are more acidic (pH 2-4) [4].
Synthesis of the Sorbents
The akaganeite sample (Aka) was synthesized starting from a literature procedure but with some modification [36]. In a 100 mL borosilicate bottle with a polypropylene cap, 12.5 mL of 0.2 M EDTA was added to 28.5 mL of 5.26 M sodium hydroxide solution. To this solution, 25 mL of 2 M FeCl 3 ·6H 2 O solution was added (pH = 10) under vigorous stirring. The pH was adjusted to 2 with the addition of HCl 37% w/w, and the suspension was aged at 98 • C for 4 h in a laboratory oven. The bottle was then rapidly cooled in an ice bath. The solid was separated through centrifugation at 7000 rpm, washed several times with water, and then with ethanol until the chloride content was considered as structural (Cl/Fe = 0.11, estimated by STEM-EDX analysis) [37], then collected and dried under air at 55 • C for two days.
Ferrihydrite (Fer) was obtained by adding 180 mL of KOH 5 M to 100 mL 1 M Fe(NO 3 ) 3 solution [38]. The solid was recovered by centrifugation at 7000 rpm, washed several times with water until the removal of potassium ions (K/Fe = 0.003, estimated by STEM-EDX analysis), and dried at 40 • C in the oven for 48 h.
The maghemite sample (Mag) was prepared through the oxidation of magnetite in air. Magnetite was synthesized by adapting a co-precipitation method [38], dissolving 4.0590 g of FeCl 2 ·4H 2 O in 10 mL HCl 2 M to obtain a solution containing Fe II 2 M. This solution was added in a flask with 50 mL of a 1 M solution of Fe III , obtained by dissolving 20.6179 g of Fe(NO 3 ) 3 in HCl 2 M. Then, 500 mL of NH 3 1.4 M were added dropwise, using a burette, to the solution of Fe II and Fe III under stirring. The as-formed black precipitate was left to settle for 10 min and separated from the liquid solution using a magnet. The solid was finally washed four times with water and left to dry in an oven at 50 • C.
The silica-iron oxide composite (Comp) was prepared from porous silica adapting a method from the literature [39]. Briefly, at 35 • C, 4.6 g of pluronic 123 (P123) and 7.7 g of Na 2 SO 4 were dissolved in 135 g of 0.02 M acetic acid-sodium acetate buffer solution at pH = 5 for 16 h to form a homogeneous milky mixture under stirring. To this solution mixture, 10.24 mL of TMOS was added under stirring. After 5 min, the stirring was stopped. The resultant mixture was kept in a static condition for 24 h and then transferred into a Teflon-lined autoclave and heated to 100 • C for 24 h. The mixture was centrifuged, and the supernatant discarded. The solid was repeatedly washed with distilled water to remove the inorganic salts and then dried at room temperature. The final product was obtained by calcination under air at 550 • C for 5 h (heating rate 2 • C min −1 ) to remove the organic template. For the impregnation step, 1.0073 g of silica, dried at 120 • C overnight, was dispersed in 25 mL of ethanol and left to homogenize for 1 h under stirring in a crucible. To this mixture, 20 mL of an ethanolic solution of iron nitrate was added under stirring. The mixture was left under a fume hood until most of the ethanol evaporated and a dense paste remained. The crucible was then transferred to a pre-heated furnace at 400 • C for 3 h to decompose the iron nitrate. The final iron oxide content was 28.3% w/w.
Adsorption Tests
About 50 mg of sample were placed in a 50 mL centrifuge tube with 20 mL of arsenic solution at various concentrations. The solutions were prepared in volumetric flasks, using Milli-Q water, starting from Na 2 HAsO 4 ·7H 2 O as the source of As V or NaAsO 2 as the source of As III . The pH of the solutions was modified before contact with the solid sorbent material by adding 0.1 M or 1 M NaOH or HCl. 5 mL of this starting solution were diluted with 5 mL of 4% w/w HNO 3 for subsequent ICP-OES analysis, while 20 mL were put in the 50 mL centrifuge tubes containing the solid samples. After contact with the solid, the pH of the mixtures was measured again. The tubes were then put in an orbital shaker, rotating at 40 rpm for 16 h. After centrifugation at 8000 rpm for 10 min, the supernatant was separated and filtered with a 0.45 µm sieve. The pH of the solution was measured again, then 8 mL were transferred into a 15 mL testing tube together with 2 mL of nitric acid 10% w/w and analyzed by ICP-OES. Several parameters were modified, such as initial pH (pH 0 2-8), initial concentration (C 0 10-500 mg L −1 ), arsenic oxidation state (As III or As V ), contact time (10-960 min), ionic strength (NaCl 0.01-1 M), and competitors (SO 4 2− or PO 4 2− 1:100-1:1), as shown in Table S1.
Isotherm Models
The adsorbed amount of arsenic (q e ) was calculated through Equation (1) after correcting the sorbent mass for the water content, estimated via gravimetric analysis by heating the sample at 105 • C.
DR )
K DR = Dubinin-Radushkevich constant (mol 2 kJ −2 ) ε DR = Dubinin-Radushkevich variable (kJ mol −1 ) [44] In the Langmuir isotherm model (Equation (2)), q m is the maximum adsorption capacity (mg g −1 ), and K L is the Langmuir constant (L mg −1 ), which is related to the energy of adsorption. It assumes that each active site is equivalent, and it is energetically irrelevant whether adjacent sorption centers are empty or occupied.
In the Freundlich isotherm model (Equation (3)), K F is the Freundlich constant, which gives an estimation of the amount of sorbate retained per gram of adsorbent at the equilibrium concentration (mg 1−1/n L 1/n g −1 ), and n is a measure of the nature and strength of the sorption process and the distribution of active sites related to the surface heterogeneity (the heterogeneity of the system increases with n). Therefore, it assumes that the sorption process occurs on non-equivalent active sites due to repulsion between sorbent species.
It assumes that the heat of adsorption decreases linearly with the increase in the amount of adsorbed species.
The Redlich-Peterson isotherm model (Equation (5)) is a hybrid between the Langmuir and Freundlich models.
The model is used to differentiate between physisorption and chemisorption. The mean free energy of adsorption E ads (kJ mol −1 ) can be calculated following Equation (7).
Kinetic Models
The adsorbed amount of arsenic at a certain time (q t ) was calculated through Equation (8).
The plotted data q t vs. C t were then fitted by the pseudo-first order (Equation (9)) and pseudo-second order (Equation (10)) kinetic models (Table 2).
where K (min −1 ) and K" (g mg −1 min −1 ) are the pseudo-1st order and pseudo-2nd order constants, respectively. The pseudo 2nd order model in linearized form (Equation (11)) was then used to fit the t/q t vs. t plots.
Model Equation # Equation Parameters
Pseudo 1st-Order The kinetic data were fit also by the intraparticle diffusion model (Equation (12)).
Characterization Techniques
The solutions were analyzed by Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) using an Agilent 5110 device (Agilent, Santa Clara, CA, USA). The calibration line was performed in the range 1-100 mg L −1 at wavelength 188.980 nm for arsenic. Each sample was analyzed three times in 2% w/w HNO 3 solution. The samples Fer, Mag, and Comp were characterized by powder X-ray Diffraction (XRD) using a PANalytical X'pert Pro (Malvern PANalytical, Malvern, UK) equipped with Cu Kα radiation (1.5418 Å). The sample Aka was analyzed through a Rigaku Smartlab diffractometer (Rigaku Corporation, Tokyo, Japan) equipped with a 9 kW rotating anode and graphite monochromator in the diffracted beam with Bragg-Brentano parafocusing geometry. The refinement of the structural parameters was performed by the Rietveld method using the MAUD software (v 2.991, Radiographema, Trento, Italy) [45] and LaB 6 from NIST as a reference standard for determining the instrumental parameters. The CIF structure used for the refinement were 0003079 from AMCSD for akageneite [46], 9011571 from COD for ferrihydrite [47], and 9006316 from COD for maghemite [48]. Room Temperature (RT) 57 Fe Mössbauer spectroscopy was done on a Wissel spectrometer (Wissenschaftliche Elektronik GmbH, Stamberg, Germany) using transmission arrangement and proportional detector LND-45431. An α-Fe foil was used as a standard, and the fitting procedure was done by the NORMOS program (v 25.1.1989, University of Duisburg-Essen, Duisburg, Germany). Transmission electron microscopy (TEM) images were obtained using a JEOL JEM 1400 Plus (Jeol Ltd., Tokyo, Japan) operating at 120 kV. The specimens were prepared by dropping an ethanol dispersion of the samples on a 200-mesh carbon-coated copper grid. High-Resolution TEM images were carried out through a JEOL JEM 2010 UHR (Jeol Ltd., Tokyo, Japan) operating at 200 kV equipped with a 794 slow-scan CCD camera. Zeta potential measurements were performed through a Malvern Instrument Zetasizer Nano ZSP (Malvern PANalytical, Malvern, UK) equipped with a He-Ne laser (λ = 633 nm, max. 5 mW) and operated at a scattering angle of 173 • , using Zetasizer software (v 7.03, Malvern PANalytical, Malver, UK) to analyze the data. The sample was prepared by suspending the composites (5 mg mL −1 ) in distilled water and adding HCl and NaOH to modify the pH from 2 to 9. The scattering cell temperature was fixed at 25 • C. Textural analyses of all samples were performed on a Micromeritics ASAP 2020 (Micromeritics, Norcross, Georgia, USA) by determining the nitrogen adsorption−desorption isotherms at −196 • C. Prior to analyses, the iron oxides and hydroxides samples were heated for 12 h under a vacuum at 120 • C (heating rate, 1 • C min −1 ), while treatment at 250 • C (heating rate, 1 • C min −1 ) for 12 h was applied on the bare silica and silica-composite sample. The specific surface area (S BET ) was computed by the Brunauer−Emmett−Teller (BET) equation [49] from the adsorption data in the P/P 0 range 0.05−0.30 for the mesoporous samples Aka, Mag, Silica, and Comp, while the Dubinin-Radushkevic model [44] was applied in the sample Fer, due to its own microporous nature. The total pore volume (V p ) was calculated at P/P 0 = 0.87. The pore diameter was determined by applying the Barrett−Joyner−Halenda (BJH) model [50] to the isotherm desorption branch for the mesoporous samples Aka, Mag, and Comp, while the Horvath-Kawazoe model [51] was adopted for the microporous Fer. FTIR spectra of the sorbents were acquired in a KBr pellet through a Bruker Equinox 55 spectrophotometer (Bruker, Billerica, MA, USA) in the region 400-4000 cm −1 . The spectra were processed with OPUS software (v 7.6, Bruker, Billerica, MA, USA). The sorbents, after arsenic uptake, were analyzed by means of an Agilent Cary 630 spectrophotometer (Agilent, Santa Clara, CA, USA) equipped with an ATR module in the range 650-4000 cm −1 . The spectra were processed with Microlab PC (v 5.5.1989, Agilent, Santa Clara, CA, USA).
Characterization of the Sorbents
The Fe III -based sorbents were prepared via easy and low-cost methods to obtain nanosized systems. XRD and RT 57 Fe Mössbauer spectroscopy (Figure 1a,c) show that all the iron oxide samples feature a single Fe III -based structure. Monoclinic I2/m akageneite is ascribed to Aka and cubic Fd3m maghemite for Mag. Fer sample displays the typical pattern of two-lines ferrihydrite, and it was fitted with the hexagonal P63mc phase. Comp, on the contrary, reveals a broadband at about 22 • , typical of amorphous silica, and the distinctive reflexes of two Fe III oxides, i.e., hematite and maghemite. All the RT Mössbauer spectra (Figure 1c) are characterized by isomer shift values in the range 0.32-0.38 mm s −1 , typical of Fe III -based phases (Table S2) [52][53][54][55][56][57][58]. The Aka, Fer, and Comp spectra feature one or more doublets, whereas the Mag spectrum can be fitted with two broad sextets, accounting for the distribution of hyperfine fields. The two sextets feature hyperfine field values of 47.09 (3) and 41.9 (4) T, corresponding to iron cations in the tetrahedral and octahedral sites of the spinel ferrite structure, respectively. The isomer shift for both the sextets is in the range 0.32-0.34 mm s −1 , typical for Fe III , indicating the effective oxidation of Fe II of magnetite from which it derived, whose values are around 0.6-0.7 mm s −1 . In the case of Aka, the spectrum was fitted with two doublets, as suggested in the literature [38], leading to isomer shift values of about 0.37 mm s −1 and quadrupole splitting of 0.536 (7) mm s −1 and 0.940 (9) mm s −1 , respectively. The spectrum of Fer can be fitted, based on a previous study [59], with three doublets corresponding to different non-equivalent iron positions in the ferrihydrite structure, as reported in Table S2. The Comp spectrum was fitted with a doublet with isomer shift equal to 0.34 (1) mm s −1 and quadrupole splitting of 0.74 (2) mm s −1 , similar to those obtained for similar systems of maghemite/hematite NPs impregnated in porous silica matrixes [60,61].
Characterization of the Sorbents
The Fe III -based sorbents were prepared via easy and low-cost methods to obtain nanosized systems.
XRD and RT 57 Fe Mössbauer spectroscopy (Figure 1a,c) show that all the iron oxide samples feature a single Fe III -based structure. Monoclinic I2/m akageneite is ascribed to Aka and cubic Fd3m maghemite for Mag. Fer sample displays the typical pattern of twolines ferrihydrite, and it was fitted with the hexagonal P63mc phase. Comp, on the contrary, reveals a broadband at about 22°, typical of amorphous silica, and the distinctive reflexes of two Fe III oxides, i.e., hematite and maghemite. All the RT Mössbauer spectra (Figure 1c) are characterized by isomer shift values in the range 0.32-0.38 mm s −1 , typical of Fe III -based phases (Table S2) [52][53][54][55][56][57][58]. The Aka, Fer, and Comp spectra feature one or more doublets, whereas the Mag spectrum can be fitted with two broad sextets, accounting for the distribution of hyperfine fields. The two sextets feature hyperfine field values of 47.09 (3) and 41.9 (4) T, corresponding to iron cations in the tetrahedral and octahedral sites of the spinel ferrite structure, respectively. The isomer shift for both the sextets is in the range 0.32-0.34 mm s −1 , typical for Fe III , indicating the effective oxidation of Fe II of magnetite from which it derived, whose values are around 0.6-0.7 mm s −1 . In the case of Aka, the spectrum was fitted with two doublets, as suggested in the literature [38], leading to isomer shift values of about 0.37 mm s −1 and quadrupole splitting of 0.536 (7) mm s −1 and 0.940 (9) mm s −1 , respectively. The spectrum of Fer can be fitted, based on a previous study [59], with three doublets corresponding to different non-equivalent iron positions in the ferrihydrite structure, as reported in Table S2. The Comp spectrum was fitted with a doublet with isomer shift equal to 0.34 (1) mm s −1 and quadrupole splitting of 0.74 (2) mm s −1 , similar to those obtained for similar systems of maghemite/hematite NPs impregnated in porous silica matrixes [60,61]. Figure 1. Wide-angle XRD patterns and position of the theoretical XRD diffraction peaks from PDF cards (a), small-angle XRD (b), 57 Fe Mössbauer spectra (c), FTIR spectra (d), N 2 −physisorption isotherms (e), and corresponding pore size distribution (f) of the samples. The small-angle X-ray patterns of the silica-based samples (Figure 1b) show the presence of a shoulder at about 1.5 • , which indicates the presence of an ordered porous structure in the mesoporous range [60,61].
FTIR spectra of the samples (Figure 1d) reveal the typical bands of iron oxides and oxyhydroxides (Table S3), besides those related to water. In particular, Aka shows two Fe-O vibrational modes at 680 and 470 cm −1 [24,29,62]. The band at 570 cm −1 and shoulders at 820 and 630 cm −1 in the sample Mag are a clear indication of the presence of maghemite, in agreement with 57 Fe Mössbauer data [38,55,63]. For the sample Fer, the Fe-O band is placed at about 600 cm −1 , while the bands at 1500 and 1330 cm −1 are related to the Fe-OH stretching modes [38]. The sample Comp discloses the bands associated with silica (Si-O-Si stretching modes at 1220, 1090, and 465 cm −1 , and Si-OH stretching mode at 810 cm −1 ), whereas those related to the iron oxide phase are difficult to be detected probably because of its form as nanocomposite [60,61].
The textural properties of the sorbents were studied through N 2 physisorption (Figure 1e, Figures S1 and S2, Table 3). Aka, Mag, and Comp present an IV-type isotherm with an H1 hysteresis loop characteristic for mesoporous materials. On the contrary, Fer features an I-type isotherm with a H3 hysteresis loop typical of microporous materials. As expected, the largest surface area (410 m 2 g −1 ) is observed for Comp, followed by Fer, Aka, and Mag (from 92 to 260 m 2 g −1 ). The pore volumes are instead higher for the mesoporous materials (Comp > Aka > Mag) and lower for Fer due to the presence of only micropores. The pore size distributions (PSD) of Mag and Aka ( Figure 1f) are centered at about 11.8 and 9.4 nm, respectively, while sharper PSD is observed for Comp due to the mesostructured nature of the silica matrix. For Fer, the micropore distribution, obtained using the Horvath-Kawazoe model, showed a maximum at about 0.7 nm. The comparison between Comp and the bare silica matrix ( Figure S1), reveals a decrease of 10% of surface area and 11% of pore volume in the first one, as expected after the impregnation process. The PSD is instead centered, for both samples, at about 8 nm, suggesting the formation of isolated NPs inside the pores in spite of a uniform layer [39,60,61]. Table 3. Structural parameters of the sorbents extracted from the Rietveld refinement of XRD patterns. In the case of anisotropic Aka, the anisotropic-no-rules model was employed. Morphological parameters calculated from TEM micrographs. Textural parameters calculated from N 2 -physisorption experiments. V P for the Fer sample was calculated by the Horvat Kawazoe model, while a BJH model was adopted for the other samples. a, b, and c: cell parameters; D XRD 1 and D XRD 2 : crystallite sizes; D TEM 1 and D TEM 2 : particle sizes; S BET : surface area; V p : pore volume; D p : pore diameter.
The differences in the surface areas and the pore volumes observed among the iron oxide/oxyhydroxides are mainly due to the morphological properties of the samples in terms of the size and shape of NPs. For this reason, TEM analyses were conducted on all samples and are shown in The Rietveld refinements of the XRD patterns ( Figure S5) were performed on the basis of the information extracted from TEM analysis. The cell parameters, the crystallite sizes, and the relative fraction of the phases (for Comp and Aka) were determined ( Table 3). The XRD pattern of Aka was refined by using two populations of akageneite particles: one referring to isotropic particles and one to those with anisotropic shape, for which isotropic and anisotropic-no-rules models [64] were used, respectively. A diameter for the isotropic model of 5.3 (1) nm was found, while, for the anisotropic one, a minimum dimension (DXRD1) of 2.0 (6) nm and a maximum one (DXRD2) of 25.1 (2) nm were obtained, corresponding to the D11 and D22 textural components, respectively. The lower crystallite size values, in comparison with those obtained for the particles by TEM, are probably derived from the presence of NPs made up of at least two crystallites close to each other. For Mag and Fer, an isotropic model was employed since it gave satisfactory outcomes, resulting in crystallite sizes of 14.0 (1) nm and 1.7 (3) nm, respectively, in good agreement with the TEM observations. Comp was found to be composed of 18% w/w of hematite and 82% w/w of maghemite, both featuring crystallite sizes between 7 and 9 nm, compatible with the mesopore size of the matrix.
In view of possible applications as adsorbents for ionic species from polluted water, the evaluation of the surface charge of the samples at different pH is crucial. Therefore, the zeta (ζ) potential measurements on all samples ( Figure 3) were performed. Comp Figure S3). Moreover, as can be seen in Figure 2a and Figure S4, some small NPs of about 3 (1) nm are visible.
The Rietveld refinements of the XRD patterns ( Figure S5) were performed on the basis of the information extracted from TEM analysis. The cell parameters, the crystallite sizes, and the relative fraction of the phases (for Comp and Aka) were determined ( Table 3). The XRD pattern of Aka was refined by using two populations of akageneite particles: one referring to isotropic particles and one to those with anisotropic shape, for which isotropic and anisotropic-no-rules models [64] were used, respectively. A diameter for the isotropic model of 5.3 (1) nm was found, while, for the anisotropic one, a minimum dimension (D XRD1 ) of 2.0 (6) nm and a maximum one (D XRD2 ) of 25.1 (2) nm were obtained, corresponding to the D 11 and D 22 textural components, respectively. The lower crystallite size values, in comparison with those obtained for the particles by TEM, are probably derived from the presence of NPs made up of at least two crystallites close to each other. For Mag and Fer, an isotropic model was employed since it gave satisfactory outcomes, resulting in crystallite sizes of 14.0 (1) nm and 1.7 (3) nm, respectively, in good agreement with the TEM observations. Comp was found to be composed of 18% w/w of hematite and 82% w/w of maghemite, both featuring crystallite sizes between 7 and 9 nm, compatible with the mesopore size of the matrix.
In view of possible applications as adsorbents for ionic species from polluted water, the evaluation of the surface charge of the samples at different pH is crucial. Therefore, the zeta (ζ) potential measurements on all samples ( Figure 3) were performed. Comp features the lowest surface charge at acidic pH (≈5 mV) and the lowest isoelectric point (pI ≈ 4.5), mainly due to the high amount of silica, which features a low surface charge [65]. On the contrary, Mag presents higher ζ-potential values (>20 mV) up to pH 5 and pI ≈ 7. A higher isoelectric point is observed for Fer (pI ≈ 8.5) and Aka (pI ≈ 10), together with higher ζ-potential values when the surface is positively charged up to pH 7 (30-40 mV). features the lowest surface charge at acidic pH (≈5 mV) and the lowest isoelectric point (pI ≈ 4.5), mainly due to the high amount of silica, which features a low surface charge [65]. On the contrary, Mag presents higher ζ-potential values (>20 mV) up to pH 5 and pI ≈ 7. A higher isoelectric point is observed for Fer (pI ≈ 8.5) and Aka (pI ≈ 10), together with higher ζ-potential values when the surface is positively charged up to pH 7 (30-40 mV).
Effect of the pH in the As V and As III Test Removal by Fe III -Based Sorbents
To estimate the optimal pH value for the adsorption, the first experiment focused on the pH dependence of the As V and As III adsorption capacity of the adsorbents. Indeed, this process depends on the arsenic species present in the solution, as can be seen in the Bjerrum plot in Figure S6, and on the surface species and charge of the different sorbents as a function of the pH (Figures 3 and S7), therefore several reactions are possible ( Figure S8).
For As V , Aka is the most efficient one, with a removal capacity close to 100% in the whole pH0 range. At pH0 2 and 3, Fer also features high arsenic uptake (100% and 94%, respectively), but its efficiency drops to 56% at pH0 4 and 50% at pH0 6, finally reaching 23% at pH0 8. A similar behavior, but with a more gradual worsening and lower performance, is observed for Mag (pH0 2: As V removal = 68.5%; pH0 8: As V removal = 16.2%) and Comp (pH0 2: As V removal = 26.1%; pH0 8: As V removal = 6.1%). The pH measure of the arsenic solution before contact with the sorbents (pH0), immediately after the contact (pHInt), and after the batch tests (pHFin) reveals interesting information about the adsorption process ( Figure S9). For Aka, a decrease in pH is observed, more consistent as pH0 increases, while for Fer and Mag, an opposite trend can be depicted, with an increase of pH immediately after the contact of the solid with the solution, in particular at pH0 4. For Comp, similar behavior is observed with the exception of pH0 6 and 8, at which a decrease in the pH is observed. To discern whether the arsenic species or the sorbents themselves were the cause of the pH modification, all the sorbents were put in contact with water (pH ≈ 5.5), and the pH was measured immediately after. As can be seen in Figure S10, the pH of the solution containing Aka decreased to 3.09, probably due to the diffusion of Cl − and H + from the akaganeite channels toward the solution [30,66]. Other authors reported an opposite trend with an increase in the arsenic solution pH from 3.5 to 6, due to the contact with the akageneite, but, unfortunately, no explanation was provided [24]. On the contrary, the other sorbents did not cause drastic pH changes. Indeed, only a slight increase
Effect of the pH in the As V and As III Test Removal by Fe III -Based Sorbents
To estimate the optimal pH value for the adsorption, the first experiment focused on the pH dependence of the As V and As III adsorption capacity of the adsorbents. Indeed, this process depends on the arsenic species present in the solution, as can be seen in the Bjerrum plot in Figure S6, and on the surface species and charge of the different sorbents as a function of the pH (Figure 3 and Figure S7), therefore several reactions are possible ( Figure S8).
For As V , Aka is the most efficient one, with a removal capacity close to 100% in the whole pH 0 range. At pH 0 2 and 3, Fer also features high arsenic uptake (100% and 94%, respectively), but its efficiency drops to 56% at pH 0 4 and 50% at pH 0 6, finally reaching 23% at pH 0 8. A similar behavior, but with a more gradual worsening and lower performance, is observed for Mag (pH 0 2: As V removal = 68.5%; pH 0 8: As V removal = 16.2%) and Comp (pH 0 2: As V removal = 26.1%; pH 0 8: As V removal = 6.1%). The pH measure of the arsenic solution before contact with the sorbents (pH 0 ), immediately after the contact (pH Int ), and after the batch tests (pH Fin ) reveals interesting information about the adsorption process ( Figure S9). For Aka, a decrease in pH is observed, more consistent as pH 0 increases, while for Fer and Mag, an opposite trend can be depicted, with an increase of pH immediately after the contact of the solid with the solution, in particular at pH 0 4. For Comp, similar behavior is observed with the exception of pH 0 6 and 8, at which a decrease in the pH is observed. To discern whether the arsenic species or the sorbents themselves were the cause of the pH modification, all the sorbents were put in contact with water (pH ≈ 5.5), and the pH was measured immediately after. As can be seen in Figure S10, the pH of the solution containing Aka decreased to 3.09, probably due to the diffusion of Cl − and H + from the akaganeite channels toward the solution [30,66]. Other authors reported an opposite trend with an increase in the arsenic solution pH from 3.5 to 6, due to the contact with the akageneite, but, unfortunately, no explanation was provided [24]. On the contrary, the other sorbents did not cause drastic pH changes. Indeed, only a slight increase was observed for Fer (pH 6.08) due to protonation of the surface hydroxyl groups by water molecules to form ≡FeOH 2 + and consequent release of OH − ( Figure S7) [38]. On the contrary, Mag and Comp displayed a pH decrease (4.95 and 5.21, respectively), caused by the Lewis acid behavior of unsaturated surface Fe atoms in the first case [38], and the formation of ≡SiO − in the latter one, both accompanied by a release of H 3 O + ( Figure S7) [67]. was observed for Fer (pH 6.08) due to protonation of the surface hydroxyl groups by water molecules to form ≡FeOH2 + and consequent release of OH − ( Figure S7) [38]. On the contrary, Mag and Comp displayed a pH decrease (4.95 and 5.21, respectively), caused by the Lewis acid behavior of unsaturated surface Fe atoms in the first case [38], and the formation of ≡SiO − in the latter one, both accompanied by a release of H3O + ( Figure S7) [67]. Tables S4 and S5.
Considering the different behaviors of the sorbents in modifying the pH, the iron oxide and oxyhydroxides were put in contact with different As-free solutions to study the evolution of the pH (pHInt) and the zeta potential ( Figure S10). The solutions were prepared by adding HCl or NaOH to Milli-Q water to obtain the pH0 2, 3, 4, 6, and 8. Also in this case, Aka caused a pH reduction in the whole range, accompanied by a high and positive zeta potential (36-43 mV Considering the different behaviors of the sorbents in modifying the pH, the iron oxide and oxyhydroxides were put in contact with different As-free solutions to study the evolution of the pH (pH Int ) and the zeta potential ( Figure S10). The solutions were prepared by adding HCl or NaOH to Milli-Q water to obtain the pH 0 2, 3, 4, 6, and 8. Also in this case, Aka caused a pH reduction in the whole range, accompanied by a high and positive zeta potential (36-43 mV). Mag did not induce any substantial pH modification up to pH 0 4, but at pH 0 6 and 8, a pH reduction to 5.30 and 6.11, respectively, was observed, with zeta potential values in the range 30-21 mV. Concerning Fer, at pH 0 3, 4, and 6, an increase of pH to 5.58, 7.06, and 7.33, respectively, was depicted. In this case, contrary to what was observed in Figure 3, the zeta potential fell down to 0 mV, already at pH Int 7.06 and assumed negative values at pH Int 7.33 (−33 mV). This discrepancy can be ascribed to the lower ionic strength in this latter experiment that does not permit the formation of an electric double layer [38]. Indeed, in the tests reported in Figure 3, the sorbent dispersion pH was modified firstly with HCl down to pH 2, then increased with NaOH. The higher amount of Na + adsorbed on the sorbent surface is permitted to have higher zeta potential values, confirming the role of the adsorbed ions in the sorbent properties and behavior.
The above discussion permits us to understand better the role of sorbents in arsenate removal. Indeed, the pH reduction for Aka, observed during the As V uptake tests, is caused not by the removal of arsenate but by the sorbent itself. In the case of Fer, Mag, and Comp, the change in the pH during the As V batch tests, is caused both by the surface chemistry of the sorbent and the arsenate solution equilibria. In fact, the increase in the pH, starting from pH 0 3, is due to the protonation of the surface hydroxyl groups, but also involves the removal of arsenic species, as evidenced by the differences between pH Int and pH Fin . On the contrary, the decrease in the pH for Comp at pH 0 6 and 8 can be mainly ascribed to the deprotonation of the silica surface, since only a low amount of arsenate species is removed. Therefore, it is worth noting that the pH, in order to estimate the surface charge and the arsenic speciation, is derived from the contact of the arsenic solution with the sorbent (pH int in Table S4), which in many cases differs from the initial pH value (pH 0 in Table S4). In this optic, the decrease in the As V uptake with the increase in pH int agrees with the observed trends of the ζ-potential: a positive charge is found at acidic pH based on dominant ≡FeOH 2 + species, and a negative charge is found at basic pH due to a majority of superficial ≡FeO − . This determines a different extent of interaction between the sorbent surface and the arsenate anions as a function of the pH [38]. The higher efficiency of Aka, featuring 100% of As V removal in the whole pH 0 range, can be explained considering the pH Int instead of pH 0 since the sorbent itself drops it down to more acidic pH, where the oxyhydroxide is positively charged and works better ( Figure S9). For Fer, only at pH 0 2 and 3, pH Int remains acid, while for the other pH 0 values, neutrality or basicity was observed after the sorbent-As V solution contact. The decrease in As V uptake at pH 0 8 can be easily explained considering the negative charge of the Fer surface (Figure 3). At pH 0 4 and 6, corresponding to pH Int 6.5-6.8, we must consider that besides H 2 AsO 4 − , HAsO 4 2− is also already present in the solution ( Figure S6), whose adsorption on the oxyhydroxide surface is less favored due to the release of worse leaving groups than those for H 2 AsO 4 − ( Figure S8, reactions +2A/+2B vs. +3A/+3B). A comparison between Fer and Mag reveals that pH Fin is always higher for the first sorbent, with a different trend with respect to pH Int . Since the two oxyhydroxides featured the best performances at 100 mg L −1 toward As V removal, the tests were repeated employing 500 mg L −1 as the starting concentration (Table S4, Figure 4b, Figures S11 and S12), to evaluate the pH-dependence in sorbent saturation condition. It is worth noting that, for Aka, pH int was found to be quite close to pH 0 , due to the high concentration of arsenate species, which act as a buffer solution. In this case, as for the other sorbents, it is possible to observe a decrease in As V uptake with increasing the pH 0 , with adsorption capacity equal to 87 mg g −1 at pH 0 2 and 51 mg g −1 at pH 0 8. However, this decrease is not gradual, with almost constant values observed between pH 0 2 and 4 and a higher worsening of the performances at pH 0 6 and 8. If the surface charge is considered, one should expect a constant behavior of up to pH 0 7, while we observed a drop already at pH 0 6, as explained above due to the presence of HAsO 4 2− [15]. Moreover, at basic pH, the OH − present in solution competes with the negatively charged arsenate species ( Figure S6) [25], lowering the As uptake. For Fer, the trend of pH Int , pH Fin , and As V removal, is similar to what was observed at 100 mg L −1 . The results revealed a maximum adsorption capacity reached at pH 0 2 equal to 71 mg g −1 , lower than that of Aka (87 mg g −1 ). If q e values are normalized for the surface areas (Table 3), the arsenic uptake of Aka and Fer are 0.43 and 0.27 mg m −2 , respectively, probably caused by the preferential orientation of akaganeite nanotubes along specific directions, which can have higher concentration of active sites. In addition, this study afforded the same experimental conditions and confirmed the higher efficiency of both oxyhydroxides (Figure 4), if compared to oxides, due to the higher density of superficial hydroxyl groups and surface area [35]. Finally, if Mag and Comp are compared by normalizing the q e values for the active phases ( Figure S13), their efficiency is similar, and in some cases higher for Comp, indicating the complete accessibility of the iron oxide inside the pores. Indeed, the ideal advantage in dispersing an active phase in porous silica may reflect higher chemical and mechanical stability and the possibility to modify the silica walls with other kinds of functional groups and/or active inorganic phases [35,68]. Conversely, one should evaluate the cost of producing such sorbents and the possible issues related to secondary silicon pollution [39]. As reported in Table S4, silicon was found after the adsorption tests, with concentrations that increase with the pH.
Regarding the adsorption of As III (C 0 = 100 mg L −1 ), all the samples display a lower arsenic removal at pH 0 2, then an increase and a steady behavior in the pH 0 range 3-8 ( Figure 4b, Table S5), as already observed in the literature for akageneite in this pH range [29]. The different behavior, if compared to As V , is explained by the existence of neutral species (H 3 AsO 3 ) up to pH 8, whose uptake is not affected by the surface charge of the sorbents ( Figure S6) [25]. In this range, the most efficient sample becomes Fer, having removals close to 96% and an adsorbed amount of about 50 mg g −1 , higher than that of Aka (As III removal 80%, q e = 36 mg g −1 ). This result indicates that arsenious acid does not diffuse well inside Aka nanotubes, probably due to the absence of attractive electrostatic forces, indicating that not all of the akaganeite surface is available for As III uptake, in contrast with Fer. The evaluation of the effect of the contact between the sorbents and the As III solution on the pH ( Figure S14) revealed a similar behavior when compared to the As V one, with some differences. For instance, the pH decrease, for Aka, at pH 0 8 is more significant, probably due to the absence of buffer effects from arsenite species ( Figure S6). For, Fer, Mag, and Comp, the discrepancy between the pH values is less important. Only small differences can be identified in the comparison with the As V adsorption. For instance, at pH 0 8, a decrease in pH Int is visible, caused by the iron oxide itself ( Figure S10).
The adsorbed amount normalized for the active phase for the sample Comp is lower if compared to Mag, contrary to what was observed for As V removal ( Figure S15). Again, this result can be justified considering that the diffusion of As III species through the silica mesochannels is not favored due to the absence of attractive electrostatic forces, similar to what was observed for Aka. Concerning the secondary silicon pollution (Table S5), the comparison between As III and As V tests reveals that the phenomenon is limited in the case of the As III species, and the silicon release is mainly affected by the pH, probably due to a weaker interaction of arsenite with the sorbent. On the contrary, the Si release observed for the As V removal tests indicated that arsenate species play a crucial role, as already observed in a previous study [39], beyond a pH effect.
As for As V uptake, the As III removal was studied under sorbent saturation condition (C 0 = 500 mg L −1 ) for the two oxyhydroxides (Figure 4d, Figures S16 and S17). Similar results to the 100 mg L −1 tests were found in the arsenic uptake (but with a higher absolute adsorbed dose), in the pH Int and pH Fin trends, as a function of the pH 0 . The maximum values were reached at pH 0 8, equal to 91 mg g −1 and 144 mg g −1 for Aka and Fer, respectively. The adsorbed amount normalized for the surface area results higher for Fer with respect to Aka in the whole pH range, supporting the idea of a reduction in the available surface-active sites for Aka due to the absence of diffusion of arsenious acid inside the nanotubes.
FTIR spectra acquired after As III and As V adsorption on Aka and Fer are reported in Figure S18. For Aka, the libration OH-Cl at 845 cm −1 becomes less visible due to the appearance of a new band associated with the As-O stretching at 815 cm −1 [25,26,34]. Concerning Fer, the As-O band is located at 790 cm −1 , indicating weaker binding if compared with Aka. Furthermore, there is a strong reduction of the bands at 1500, 1330, 1065, and 850 cm −1 , ascribed to Fe-OH (Figure S18b), upon As adsorption. For both Fer and Aka, the As-O stretching band for As V adsorption is stronger than As III , probably due to the involvement of a different number of As-O bonds [16].
Hence, even though in the literature there are studies devoted to As V and/or As III removal by both ferrihydrite [23,[31][32][33][34] and akageneite [24][25][26][27][28][29], the differences in the experimental conditions hinder a comparison between them. Therefore, the evaluation of the most efficient oxyhydroxide is not straightforward, and, to the best of our knowledge, the current work is the first example of a direct comparison. Even though Aka features an As III uptake lower than that of Fer, it can be considered the most promising sample. In fact, it should be noted that the amount of As III is generally much lower than that of As V in aerobic environments [68]. Moreover, Aka can efficiently remove both As III and As V species in the whole pH 0 range (2-8).
Effect of Initial Concentration and Isotherm Modelling on the Adsorption of As V by Akaganeite
To deepen the arsenic removal mechanism for the most promising sample, Aka, the adsorption of the As V species, which is more sensitive to pH with respect to As III ones in the pH range 3-8 ( Figure S6), was studied under different initial concentrations (10-500 mg L −1 ), contact time (10-960 min), ionic strength (NaCl 0-1M), and presence of competitors (sulphate, phosphate) at pH 0 3 and 8 ( Figures 5-7). Concerning the initial As V concentration effect, both at pH 0 3 and 8, it is possible to observe a sharp increase in the adsorbed dose and then an almost steady behavior ( Figure 5). When pH 0 is 8, the pH int drastically decreases to 3 for C 0 = 10 and 50 mg L −1 (Table S6, Figure S19). As the initial concentration increases, the pH drop is less critical due to the buffer effect of the arsenate species present at higher concentrations. For instance, at 150 mg L −1 the pH goes down from 8 to 6.5, and at 250 mg L −1 from 8 to 7.3. Consequently, the adsorbed dose is lower if compared to the tests made at pH 0 = 3. The adsorbed dose vs. the equilibrium concentration (q e vs. C e ) plot was fitted with different isotherm models, namely Langmuir, Freundlich, Temkin, Redlich-Peterson, and Dubinin-Radushkevich, as described in the experimental section. The parameters are reported in Table 4. with Aka. Furthermore, there is a strong reduction of the bands at 1500, 1330, 1065, and 850 cm −1 , ascribed to Fe-OH ( Figure S18b), upon As adsorption. For both Fer and Aka, the As-O stretching band for As V adsorption is stronger than As III , probably due to the involvement of a different number of As-O bonds [16]. Hence, even though in the literature there are studies devoted to As V and/or As III removal by both ferrihydrite [23,[31][32][33][34] and akageneite [24][25][26][27][28][29], the differences in the experimental conditions hinder a comparison between them. Therefore, the evaluation of the most efficient oxyhydroxide is not straightforward, and, to the best of our knowledge, the current work is the first example of a direct comparison. Even though Aka features an As III uptake lower than that of Fer, it can be considered the most promising sample. In fact, it should be noted that the amount of As III is generally much lower than that of As V in aerobic environments [68]. Moreover, Aka can efficiently remove both As III and As V species in the whole pH0 range (2-8).
Effect of Initial Concentration and Isotherm Modelling on the Adsorption of As V by Akaganeite
To deepen the arsenic removal mechanism for the most promising sample, Aka, the adsorption of the As V species, which is more sensitive to pH with respect to As III ones in the pH range 3-8 ( Figure S6), was studied under different initial concentrations (10-500 mg L −1 ), contact time (10-960 min), ionic strength (NaCl 0-1M), and presence of competitors (sulphate, phosphate) at pH0 3 and 8 ( Figures 5-7). Concerning the initial As V concentration effect, both at pH0 3 and 8, it is possible to observe a sharp increase in the adsorbed dose and then an almost steady behavior ( Figure 5). When pH0 is 8, the pHint drastically decreases to 3 for C0 = 10 and 50 mg L −1 (Table S6, Figure S19). As the initial concentration increases, the pH drop is less critical due to the buffer effect of the arsenate species present at higher concentrations. For instance, at 150 mg L −1 the pH goes down from 8 to 6.5, and at 250 mg L −1 from 8 to 7.3. Consequently, the adsorbed dose is lower if compared to the tests made at pH0 = 3. The adsorbed dose vs. the equilibrium concentration (qe vs. Ce) plot was fitted with different isotherm models, namely Langmuir, Freundlich, Temkin, Redlich-Peterson, and Dubinin-Radushkevich, as described in the experimental section. The parameters are reported in Table 4. Table S6. Table S6.
adsorbed for the pH0, indicating rapid reactions (Table S7). The qt vs. t 1/2 plots (Figure 6c) were fitted by the intraparticle diffusion model in two different steps, which account for two different adsorption mechanisms. The first one is associated with a faster adsorption process (diffusion of arsenate from the solution to the Aka surface), featuring the highest constant at both pH0 (ki) and ending at about 60-120 min. The second step, almost parallel to the x-axis, corresponds to a slower uptake that takes place once the sorbent surface is enriched by arsenate species. Table S7.
Effect of Added Salts as Competitors the Adsorption of As V by Akaganeite
Since it is known that ionic strength affects the adsorption capacity, tests at pH0 3 and 8, in monolayer sorbent saturation condition (C0 = 250 mg L −1 ), were performed by varying NaCl concentration in the range 0-1 mol L −1 (Table S9, Figure 7a). For pH0 3, the As V uptake was almost constant, with just a small decrease with the increase in the NaCl concentration (−7 mg g −1 ). On the contrary, a slight increase was observed at pH0 8 that increased the ionic strength (+14 mg g −1 ). This behavior was also observed by other authors [24], who hypothesized an increase in the surface charge due to the adsorption of cations (K + in their case instead of Na + ) at basic pH, and a consequent increase in arsenate removal capacity. This phenomenon does not occur at acidic pH due to the repulsion between the superficial ≡FeOH2 + species and the cations in the solution. On the contrary, the attraction of anions from the solution might occur, leading to a slight worsening of the removal performance. The pH was also slightly affected by NaCl in the solution (ΔpH = +0.1 for a change of one order of magnitude in the molarity), regardless of the presence of arsenate species ( Figure S10). This change is strictly related to the chloride ions since the presence of NaNO3 did not affect the pH in the same way ( Figure S10). Indeed, the presence of chloride in the solution hinders the release of Cl − and H + from the akaganeite channels toward the solution [30,66]. Table S7. Tables S9-S11.
With the aim of monitoring the As V uptake with the presence of competitors, sulphate and phosphate were tested at different concentrations, in 1:1, 10:1, and 100:1 molar ratios with respect to arsenate, corresponding to 0.003, 0.033, and 0.334 mol L −1 of competitor concentration, respectively (Tables S10 and S11). The tests were conducted at both pH0 3 and 8, with an initial arsenic concentration equal to 250 mg L −1 . The results (Figure 7b,c) show that sulphate features at pH0 8 have a similar behavior to what was observed for NaCl, with a slight improvement (+8 mg g −1 ), but a higher adsorption decrease at pH0 3 is observed (−27 mg g −1 ), probably due to the doubled charge of sulphate anions with respect to chlorides. Conversely, the phosphate causes a drastic decrease in arsenic removal capacity at both pHs (−70 mg g −1 at pH0 3, −48 mg g −1 at pH0 8), as already observed [30]. This reduction is due to the chemical similarities between phosphate and arsenate for the superficial akaganeite active sites that should create a strong bond through inner-sphere complexation. On the contrary, outer-sphere complexes featuring water molecules between ligands and metal ions are found in the case of sulphate and chloride, which do not strongly influence arsenic adsorption. The presence of competitors also influenced the pH after the adsorption test (pHfin). In the case of sulphate, there is no substantial change whether this ion is present or not, and a decrease in pH is observed. When phosphate is If R 2 values are considered, the q e vs. C e tendency is better described, for both pH values, by the Redlich-Peterson model, which is a hybrid between the Langmuir and Freundlich models, accounting for energetically equivalent or non-equivalent binding sites on the sorbent active-phase, respectively. In the literature, some articles reported isotherms fitted by the Langmuir model in an equilibrium concentration range of 0-70 mg L −1 at a pH range close to neutrality [24,26,30,35]. Nevertheless, some of these authors underlined that both the models are appropriate to describe the As V adsorption on akageneite, with only slight differences in the obtained R 2 values [24]. Moreover, other works report the Freundlich model to best describe the isotherm adsorption of As III on akageneite [29], or As III /As V on ferrihydrite [29,31,32]. Therefore, our results (i.e., better fit by Redlich-Peterson model) suggest that both monolayer adsorption and heterogenous surfaces may coexist. A possible alternative interpretation for the isotherm at pH 0 3 consists in a change of the sorbent surface during the adsorption process as a function of the concentration. Indeed, up to a certain critical concentration (i.e., C 0 = 250 mg L −1 , C e = 75 mg L −1 ) the best-fitting isotherm model is the Langmuir one, indicating the filling of free, energetically equivalent active sites. Then, for higher concentrations, a better agreement of the experimental data with the Freundlich/Temkin ones is observed, coherent with the formation of adsorbate multilayers or non-energetically equivalent As-O-Fe bonds. This phenomenon is not visible when the pH of the starting solution is 8, where the experimental data well follows the Langmuir model, and probably it is missed in the literature due to the differences in the investigated equilibrium concentration range and pH. Despite the Redlich-Peterson model seems to be the most suitable, the maximum loading estimated by the Langmuir (and Dubinin-Radushkevich) model is about 80 mg g −1 at pH 0 3 and 50 mg g −1 at pH 0 8 (Table 4), indicating a higher efficiency at acidic pH, in agreement with the results previously presented. Nevertheless, a higher removal was achieved at pH 0 3 for the highest initial As V concentration (89 mg g −1 , Table S6) that again can be justified by the presence of non-equivalent active sites, not described by the Langmuir model. Table 4. Isotherm fitting parameters for adsorption of As V onto Aka at pH 3 and 8.
β RP E ads (kJ mol −1 ) The FTIR spectra of Aka ( Figure S18c) reveal how the As-O stretching band becomes more intense as the starting arsenic concentration increases from 100 mg L −1 to 250 mg L −1 , while the Fe-OH stretching band at 1360 cm −1 disappears [26,34].
Effect of Contact Time and Kinetic Modelling on the Adsorption of As V by Akaganeite
The adsorption kinetics were studied at pH 0 3 and 8, in the contact time range 10-960 min, employing a starting As V concentration of 250 mg L −1 (Table S7). This As V concentration was chosen to be high enough for the arsenate buffer to resist the pH drop caused by the sorbent at pH 0 8, but not too much to generate multilayer phenomena (monolayer sorbent saturation condition). The adsorbed dose at a specific time versus time plots (q t vs. t, Figure 6a) were fitted with the pseudo 1st and 2nd models, the latter one better fitting the experimental data, as also evidenced by the linearized plots in Figure 6b and Figure S20. The equilibrium adsorbed amount (q e calc ), obtained from the fitting with the pseudo 2nd model, is close to the experimental one, obtained already after 120 min of contact time (Table S8). Moreover, after 10 min, 80% of the removable arsenic is already adsorbed for the pH 0 , indicating rapid reactions (Table S7). The q t vs. t 1/2 plots (Figure 6c) were fitted by the intraparticle diffusion model in two different steps, which account for two different adsorption mechanisms. The first one is associated with a faster adsorption process (diffusion of arsenate from the solution to the Aka surface), featuring the highest constant at both pH 0 (k i ) and ending at about 60-120 min. The second step, almost parallel to the x-axis, corresponds to a slower uptake that takes place once the sorbent surface is enriched by arsenate species.
Effect of Added Salts as Competitors the Adsorption of As V by Akaganeite
Since it is known that ionic strength affects the adsorption capacity, tests at pH 0 3 and 8, in monolayer sorbent saturation condition (C 0 = 250 mg L −1 ), were performed by varying NaCl concentration in the range 0-1 mol L −1 (Table S9, Figure 7a). For pH 0 3, the As V uptake was almost constant, with just a small decrease with the increase in the NaCl concentration (−7 mg g −1 ). On the contrary, a slight increase was observed at pH 0 8 that increased the ionic strength (+14 mg g −1 ). This behavior was also observed by other authors [24], who hypothesized an increase in the surface charge due to the adsorption of cations (K + in their case instead of Na + ) at basic pH, and a consequent increase in arsenate removal capacity. This phenomenon does not occur at acidic pH due to the repulsion between the superficial ≡FeOH 2 + species and the cations in the solution. On the contrary, the attraction of anions from the solution might occur, leading to a slight worsening of the removal performance. The pH was also slightly affected by NaCl in the solution (∆pH = +0.1 for a change of one order of magnitude in the molarity), regardless of the presence of arsenate species ( Figure S10). This change is strictly related to the chloride ions since the presence of NaNO 3 did not affect the pH in the same way ( Figure S10). Indeed, the presence of chloride in the solution hinders the release of Cl − and H + from the akaganeite channels toward the solution [30,66].
With the aim of monitoring the As V uptake with the presence of competitors, sulphate and phosphate were tested at different concentrations, in 1:1, 10:1, and 100:1 molar ratios with respect to arsenate, corresponding to 0.003, 0.033, and 0.334 mol L −1 of competitor concentration, respectively (Tables S10 and S11). The tests were conducted at both pH 0 3 and 8, with an initial arsenic concentration equal to 250 mg L −1 . The results (Figure 7b,c) show that sulphate features at pH 0 8 have a similar behavior to what was observed for NaCl, with a slight improvement (+8 mg g −1 ), but a higher adsorption decrease at pH 0 3 is observed (−27 mg g −1 ), probably due to the doubled charge of sulphate anions with respect to chlorides. Conversely, the phosphate causes a drastic decrease in arsenic removal capacity at both pHs (−70 mg g −1 at pH 0 3, −48 mg g −1 at pH 0 8), as already observed [30]. This reduction is due to the chemical similarities between phosphate and arsenate for the superficial akaganeite active sites that should create a strong bond through inner-sphere complexation. On the contrary, outer-sphere complexes featuring water molecules between ligands and metal ions are found in the case of sulphate and chloride, which do not strongly influence arsenic adsorption. The presence of competitors also influenced the pH after the adsorption test (pH fin ). In the case of sulphate, there is no substantial change whether this ion is present or not, and a decrease in pH is observed. When phosphate is employed, its buffer effect stabilizes the pH, avoiding the decrease [27,30].
FTIR spectra of the sorbents after the tests reveal the presence of the bands associated with As-O (813 cm −1 ), P-O (1030 cm −1 ), and S-O (1112 cm −1 ), and the disappearance of the Fe-OH band at 1360 cm −1 ( Figure S18d).
Conclusions
In this work, a head-to-head comparison of the As V and As III removal ability of iron oxyhydroxides (akaganeite and ferrihydrite) and oxides (Fe 2 O 3 in the form of NPs and dispersed in a meso/macroporous silica matrix) in the pH range 2-8 are provided. Emphasis was devoted to studying the arsenic solution pH before the contact with the sorbents, soon after it, and at the end after the tests. The oxyhydroxides featured higher performances compared to the oxides in all the cases. In particular, akaganeite had higher As V uptake (89 mg g −1 at pH 0 3 and 52 mg g −1 at pH 0 8) when compared with ferrihydrite, both in acidic and basic environments, thanks to the capability to decrease the initial pH, where the surface charge is high and positive. Concerning the As III removal , elevated and steady uptake in the pH 0 range 2-8 was found for ferrihydrite (≈95% at 100 mg L −1 , q e = 144 mg g −1 at 500 mg L −1 and pH 0 8), which was higher than akaganeite (≈80% at 100 mg L −1 , q e = 91 mg g −1 at 500 mg L −1 and pH 0 8). The steady behavior in the whole pH range was justified taking into account the presence of the neutral species H 3 AsO 3 , which is not affected by the surface charge of the sorbents, and, therefore, does not diffuse inside the akaganeite nanotubes. Finally, the iron oxide-porous silica composite featured similar performances for As V uptake compared to the bare maghemite, indicating complete accessibility of active sites inside the pores, but dropped down for As III due to the absence of electrostatic interactions between arsenious acid and iron oxide NPs within the pores. Further details on the adsorption of As V on akageneite were obtained by studying the effect of initial concentration, contact time, ionic strength, and presence of competitors. The isotherm plots were best fitted with the Redlich-Peterson model, indicating the presence of energetically equivalent and non-equivalent active sites, especially at pH 0 3, where a multilayer may form when the starting concentration exceeds 250 mg L −1 . The adsorption kinetics at both pH 0 3 and 8 was fast and interpreted as pseudo second order, with the equilibrium reached after 120 min. The formation of outer-sphere complexes when electrolytes, such as NaCl and Na 2 SO 4 , are used can cause a slight increase in the removal performances at basic pH 0 and a decrease at acid ones, higher in the case of sulphate. On the contrary, the formation of inner-sphere complexes in the case of phosphate anions affected the arsenic uptake, ultimately hindering it when present in high concentrations (As:P molar ratio = 1:100).
Supplementary Materials:
The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/nano12030326/s1. Figure S1: N 2 -physisorption isotherms (left) and BJH-calculated pore size distributions (right) of the silica-based samples. Figure S2: N 2 -physisorption isotherms of the iron oxide sorbents. Figure S3: TEM and HRTEM micrographs of Aka nanorods. Figure S4: TEM micrograph of Aka and magnification to highlight the small nanoparticles. Figure S5: Rietveld refinement of XRD patterns of the samples. Inset of crystal shape modelled from Popa rule for the Aka sample. Figure S6: Bjerrum plot of arsenate (left) and arsenite (right) species reconstructed employing the dissociation constants of arsenic acid (pKa 1 = 2.20; pKa 2 = 6.67; pKa 3 = 11.53) and arsenious acid (pKa 1 = 9.23; pKa 2 = 12.13; pKa 3 = 13.40). Figure S7: Reactions of the sorbent surface in water. Figure S8: Possible reactions between the sorbent surface and arsenate species in water. Figure S9. Evolution of initial pH (pH 0 ), intermediate pH (pH Int ) and final pH (pH Fin ) for various starting pH (pH 0 ) for the sorbents with initial concentration of As V equal to 100 mg L −1 . Figure S10: pH of solution after contact with sorbents. Figure S11: Evolution of initial pH (pH 0 ), intermediate pH (pH Int ) and final pH (pH Fin ) for various starting pH (pH 0 ) for the sorbents with initial concentration of As V equal to 500 mg L −1 . Figure S12: Adsorption capacity from batch adsorption experiments with 500 mg L −1 As V solution on Aka (black) and Fer (blue) at different initial pH (pH 0 ). Figure S13: Adsorption capacity from batch adsorption experiments with 100 mg L −1 As V solution on maghemite and composite normalized for its active phase (28.3%) at different initial pH (pH 0 ). Figure S14: Evolution of initial pH (pH 0 ), intermediate pH (pH Int ) and final pH (pH Fin ) for various starting pH (pH 0 ) for the sorbents with initial concentration of As III equal to 100 mg L −1 . Figure S15: Adsorption capacity from batch adsorption experiments with 100 mg L −1 As III solution on maghemite and composite normalized for its active phase (28.3%) at different initial pH (pH 0 ). Figure S16: Adsorption capacity from batch adsorption experiments with 500 mg L −1 As III solution on Aka (black) and Fer (blue) at different initial pH (pH 0 ). Figure S17: Evolution of initial pH (pH 0 ), intermediate pH (pH Int ) and final pH (pH Fin ) for various starting pH (pH 0 ) for the sorbents with initial concentration of As III equal to 500 mg L −1 . Figure S18: FTIR spectra of akaganeite and ferrihydrite after arsenic removal. Figure S20: Sorption kinetics of As V on Aka at pH 3 and 8 fitted by linearized pseudo 1st order fitting. Table S1: Experimental parameters for adsorption tests. The sorbent amount was 50 mg and solution volume 20 mL for all the tests. Table S2: Hyperfine parameters obtained by fitting procedure of the 57 Fe Mössbauer spectra of the sorbents. Table S3: FTIR bands of the sorbents. Table S4: Batch experiments results of the sorbents at initial concentration of 100 or 500 mg L −1 of As V at various pH. Volume of the contaminant was 20 mL and adsorption time was 16 h. Table S5: Batch experiments results of the sorbents at initial concentration of 100 or 500 mg L −1 of As III at various pH. Volume of the contaminant was 20 mL and adsorption time was 16 h. Table S6: Batch experiments results of the sorbents at various initial concentration of of As V at pH 3 and 8. Volume of the contaminant was 20 mL and adsorption time was 16 h. Figure S19: Evolution of initial pH (pH 0 ), intermediate pH (pH Int ) and final pH (pH Fin ) (left) and ∆pH between intermediate and initial pH (right) at various As V initial concentration for Aka. Table S7: Batch experiments results of the sorbents at various contact time with As V at pH 3 and 8. Volume of the contaminant was 20 mL initial concentration of about 250 mg L −1 . Table S8: Linear pseudo 2nd order and intraparticle diffusion models fitting parameters for adsorption of As V onto Aka at pH 0 3 and 8. Table S9: Batch experiments results of the sorbents at different ionic strengh with As V at pH 3 and 8. Volume of the contaminant was 20 mL initial concentration of about 250 mg L −1 . pH 0 is the pH of the solution before contact with the sorbent. Table S10: Batch experiments results of the sorbents at different sulfate competitor concentration with As V at pH 3 and 8. Volume of the contaminant was 20 mL initial concentration of about 250 mg L −1 . Table S11: Batch experiments results of the sorbents at different phosfate concentration with As V at pH 3 and 8. Volume of the contaminant was 20 mL initial concentration of about 250 mg L −1 . | 16,196.6 | 2022-01-20T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Gsm Based Low Cost Smart Irrigation System with Wireless Valve Control
: In this paper presents to optimize the cost of the irrigation system and water consumption for agricultural crop based on a wireless network, that are Internet of Things (IoT) radio communications. The system consists of smart mobile phone for surveillance, the motor controller unit and the field controller unit. The SIM 900 GSM module is available in motor controller unit (PIC16F877A). Information from the field controller unit such as soil moisture, land humidity, temperature is sent to the motor controller unit through Radio & Communication. From motor controller unit the information is sent to the registered mobile number through GSM module. A Command can be sent from the mobile by GSM message to control the valves and motor.
Introduction
Farmers are the backbone of our country. The farmers should be given first priority in all aspects. India is a country where most of the people are dependent on agricultural activities. Cultivation and farming are considered as the back bone of the country. Irrigation plays an important role in agriculture. Farmers are facing many problems during irrigation process. In many of the states in India, electrical power is not provided properly to the farmers for their irrigation purpose. Due to this, frequent interruption of power, low voltage profile and water shortage. To avoid the above such problems, the smart irrigation system will play a vital for the irrigation purpose. Here, Internet of Things based low cost, smart irrigation system will be proposed in this project. To Experiment the possibilities of this project is to on /off the AC motor through smart mobile phone. This project is very useful to control the AC motor used in the application through wireless communication. By controlling the AC motor from anywhere in the captured area. ZIGBEE based wireless valve control is also used. The objectives of this paper were to control the water motor automatically and select the direction of the flow of water in pipe with the help of soil moisture sensor. Finally, send the information (operation of the motor and direction of water) of the farm field to the mobile message and g-mail account of the user.
Control Strategies
The main block diagram consists of the Mobile communication, GSM interface circuit, Solenoid control valve, LCD display, Radio frequency interface, Power supply, Keyboard interface, Single phase preventer, Driver circuit, Temperature and humidity sensor and Soil sensor.
Mobile Communication
GSM is a module to the isolated operation system, RF process, baseband process and the purpose module providing standard interfaces, which integrated RF chips of GSM, baseband chips, memory, and amplifier on the same circuit board. Designers make the microcomputer communicate with GSM module by RS232 serial port and also use the standard AT instructions to control GSM module to realize all kinds of communication, for example, sending messages, and making telephone and GPRS dial internet. But the function of sending a message is usually adopted to realize the long-range control just because of the low cost and well real-time.
Figure 1. Block diagram.
A GSM modem is a wireless modem that works with a GSM wireless network. A wireless modem behaves like a dial-up modem. The main difference between them is that a dial-up modem sends and receives data through a fixed telephone line while a wireless modem sends and receives data through radio waves.
A GSM modem can be an external device or a PC Card/PCMCIA Card. Typically, an external GSM modem is connected to a computer through a serial cable or a USB cable. A GSM modem in the form of a PC Card / PCMCIA Card is designed for use with a laptop computer. It should be inserted into one of the PC Card / PCMCIA Card slots of a laptop computer. Like a GSM mobile phone, a GSM modem requires a SIM card from a wireless carrier in order to operate.
As mentioned in earlier sections of this SMS tutorial, computers use AT commands to control modems. Both GSM modems and dial-up modems support a common set of standard AT commands. You can use a GSM modem just like a dial-up modem.
In addition to the standard AT commands, GSM modems support an extended set of AT commands. These extended AT commands are defined in the GSM standards. With the extended AT commands, you can do things like: 1. Reading, writing and deleting SMS messages. 2. Sending SMS messages. 3. Monitoring the signal strength. 4. Monitoring the charging status and charge level of the battery. 5. Reading, writing and searching phone book entries. The number of SMS messages that can be processed by a GSM modem per minute is very low --only about six to ten SMS messages per minute.
In this system, the GSM 300/1800 MHz network double band module made in Simcom Company is selected as GSM module. This module is capable to analyze band rate automatically and improve the performance of electronic public service. This module with energy save function, embedded TCP/IP and transparent mode belongs to the series of GPRS in three frequencies (900/1800/1900). The peripheral circuit of SIM300DZ mainly consists of the communication interfaces of SIM cassette and module, such as SIM-CLK and SIM I/O, which are the communication wires of module clock and data, SIM-RST and VCC, which are the reset and the power supply. What's more, the RXD and TXD are included in the peripheral circuit of SIM300DZ which connect with the serial port the MCU. It is the AT instructions that conveyed between the MCU and GSM through the very two channels. In addition the GSM module includes voice system channel and MIC channel. These channels are switched by MCU because of the AT instructions which are mainly applied to the switching between the voice and microphone in the monitor system. Finally, the transmitting ports IN+ and I are also included which are with the dual tone multi frequency (DTMF) signals. When the user communicates with the phone equipped in the car, if the button is pressed, it will produce a DTMF signal which is sent out to the multi frequency decode chip to analyze and produce Q signal through IN+ and IN-At this moment, the MCU decides how to operate according to the Q signal.
Solenoid Control Valve
A solenoid valve is an electromechanical valve for use with liquid or gas controlled by running or stopping an electrical current through a solenoid, which is a coil of wire, thus changing the state of the valve. The operation of a solenoid valve is similar to that of a light switch, but typically controls the flow of air or water, whereas a light switch typically controls the flow of electricity. Solenoid valves may have two or more ports. In the case of a two-port valve the flow is switched on or off; in the case of a threeport valve, the outflow is switched between the two outlet ports. Multiple solenoid valves can be placed together on a manifold Solenoid valves are the most frequently used control elements in fluidics. Their tasks are to shut off, release, dose, distribute or mix fluids. They are found in many application areas. Solenoids offer fast and safe switching, high reliability, long service life, good medium compatibility of the materials used, low control power and compact design. Besides the plunger-type actuator which is used most frequently, pivoted-armature actuators and rocker actuators are also used.
LCD Display
Liquid crystal displays (LCD's) have materials, which combine the properties of both liquids and crystals. Rather than having a melting point, they have a temperature range within which the molecules are almost as mobile as they would be in a liquid, but are grouped together in an ordered form similar to a crystal.
An LCD contains of two glass panels, with the liquid crystal material, sand witched in between them. The inner surface of the glass plates is coated with transparent electrodes which define the character, symbols or patterns to be displayed polymeric layers are present in between the electrodes and the liquid crystal, which makes the liquid crystal molecules to maintain a defined orientation angle.
One each polarizes are pasted outside the two glass panels. These polarizes would rotate the light rays passing through them to a definite angle, in a particular direction. When the LCD is in the off state, light rays are rotated by the two polarizes and the liquid crystal, such that the light rays come out of the LCD without any orientation, and hence the LCD appears transparent.
When satisfactory voltage is realistic to the electrodes, the liquid crystal molecules would be aligned in a specific direction. The light rays passing through the LCD would be rotated by the polarizes, which would result in activating / highlighting the desired characters. The LCD's are lightweight with only a few millimeters thickness. Since the LCD's consume less power, they are compatible with low power electronic circuits, and can be powered for long durations.
The LCD does not generate light and so light is needed to read the display. By using backlighting, reading is possible in the dark. The LCD's have long life and a wide operating temperature range. Changing the display size or the layout size is relatively simple which makes the LCD's more customers friendly.
The LCDs used exclusively in watches, calculators and measuring instruments are the simple seven-segment displays, having a limited amount of numeric data. The recent advances in technology have resulted in better legibility, more information displaying capability and a wider temperature range. These have resulted in the LCDs being comprehensively used in telecommunications and entertainment electronics. The LCDs have even started changing the cathode ray tubes (CRTs) used for the display of text and graphics, and also in small TV applications.
Crystalloids dot-matrix (alphanumeric) liquid crystal displays are available in TN, STN types, with or without backlight. The use of C-MOS LCD controller and driver ICs result in low power consumption. These modules can be interfaced with a 4-bit or 8-bit microprocessor /Micro controller. LCDs are used in similar applications where LEDs are used. These applications are displayed of numeric and alphanumeric characters in dot matrix and segment displays.
RF Transmitter and Receiver
Radio frequency, or RF, is a frequency or rate of oscillation within the range of about 3 Hz and 300 GHz. This range corresponds to frequency of alternating current electrical signals used to produce and detect radio waves. Since most of this range is beyond the vibration rate that most mechanical systems can respond to, RF usually refers to oscillations in electrical circuits.
The Electrical currents that oscillate at RF have special properties not shared by direct current signals. One such property is the ease with which it can ionize air to create a conductive path through the air. This property is exploited by 'high frequency' units used in electric arc welding. Another special property is an electromagnetic force that drives the RF current to the surface of conductors, known as the skin effect. Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. The degree of the effect of these properties depends on the frequency of the signals.
RF transmitter and receiver are available for operation in the 868-870MHz band in Europe and the 902-928MHz band in North America, both modules combine full screening with internal filtering to ensure EMC compliance by minimizing spurious radiation and susceptibility.
These RF transmitter & receiver will suit one-to-one and multi-node wireless links in such applications as car and building security, EPOS and inventory tracking, remote industrial process monitoring and data networks. Because of their small size and low power requirements, both Modules are ideal for use in portable, battery-powered applications such as handheld terminals.
Encoder
In this circuit HT 640 is used as an encoder. The 3 18 encoders are a series of CMOS LSIs for remote control system application. They are capable of encoding 18 bits of information which consists of N address bit and 18-N data bits. Each address/data input is externally trinary programmable if bonded out. It is otherwise set floating internally. Various packages of the 3 18 encoders offer a flexible combination of programmable address/data is transmitted together with the header bits via an RF or an infrared transmission medium upon receipt of a trigger signal. The capability to select a TE trigger type further enhances the application flexibility of the 3 18 series of encoders. In this circuit the input signal to be encoded is given to AD7-AD0 input pins of the encoder. Here the input signal may be from keyboard, parallel port, microcontroller or any interfacing device. The encoder output address pins are shorted so the output encoded signal is the combination of (A0-A9) address signal and (D0-D7) data signal. The output encoded signal is taken from 8 th which is connected to an RF transmitter section.
RF Transmitter
Whenever a high output pulse is given to the base of the transistor BF 494, the transistor is conducting so tank circuit is oscillating. The tank circuit consists of L2 and C4 generating 35MHz carrier signal. Then the modulated signal is given LC filter section. After the filtration the RF modulated signal is transmitted through an antenna.
RF Receiver
The RF receiver is used to receive the encoded data which is transmitted by the RF transmitter. Then the received data are given to a transistor, which acts as an amplifier. Then the amplified signal is given to carrier demodulator section in which transistor Q1 is turned on and turn off conducting depends on the signal. Due to this the capacitor C14 is charged and discharged so carrier signal is removed and saw tooth signal is appearing across the capacitor. Then this saw tooth signal is given to the comparator. The comparator circuit is constructed by LM558.
The comparator is used to convert the saw tooth signal to exact square pulse. Then the encoded signal is given to decoder in order to get the decoded original signal.
Decoder
In this circuit HT648 is used as a decoder. The 3 18 decoder a series of CMOS LSIs for remote control system application. They are paired with 3 18 series of encoders. For proper operation a pair of encoder/decoder pair with the same number of address and data format should be selected. The 3 18 series of decoder receives the serial address and data from that series of encoders that are transmitted by a carrier using an RF or an IR transmission medium. It then compares the serial input data twice continuously with its local address. If no errors or unmatched codes are encountered, the input data codes are decoded and then transferred to the output pins. The VT pin also goes high to indicate a valid transmission.
The 3 18 decoders are capable of decoding 18 bits of information that consists of N bits of address and 18-N bits of data. To meet various applications they are arranged to provide a number of data pins whose range is from 0 to 8 and an address pin whose range is from 8 to 18. In addition, the 3 18 decoders provide various combinations of address/ data numbering different package.
In this circuit the received encoded signal is 9th pin of the decoder. Now the decoder separates the address (A0-A9) and data signal (D0-D7). Then the output data signal is given to a microcontroller or any other interfacing device.
Power Supply Block
The present chapter introduces the operation of power supply circuits built using filters, rectifiers and then voltage regulators. Starting with an AC voltage, a steady DC voltage is obtained by rectifying the AC voltage, then filtering to a DC level, and finally regulating to obtain a desired fixed DC voltage. The regulation is usually obtained from an IC voltage regulator unit, which remain the same if the input DC voltage varies or the output load connected to DC voltage changes. A block diagram containing the parts of a typical power supply and the voltage at various points in the unit is shown in the Figure 2.
The AC voltage, typically 120 V rms, is connected to a transformer, which steps that AC voltage down to the level for the desired DC output. A diode rectifier that provides a full-wave rectified voltage that is initially filtered by a simple capacitor filter to produce a DC voltage. This resulting DC voltage usually as some ripple or AC voltage variation. A regulated circuit can use this DC inputs to provide a DC voltage that not only has much less ripple voltage but also remains the same DC value even if the input DC voltage varies somewhat, or the load connected to the output DC voltage changes this voltage regulation is usually obtained using one of a number of popular voltage regulation IC unit. The Power supply unit consists of the following units. 1.
Three-Terminal Voltage Regulator
The Figure 3 shows the basic connection of a three terminal voltage regulator IC to a load. The fixed voltage regulator has an unregulated DC input voltage, from a second terminal, with a third terminal connected to ground. For a selected regulator, IC device specification lists a voltage range over which the input voltage can vary to maintain a regulated output voltage over a range of load current. The specification also lists the amount of output voltage change resulting from a change in load current (load regulation) or in input voltage (line regulation).
Keyboard Interface
The keyboard which is used in project to register the mobile number for sending the motor, valve open and close condition and soil dry and wet condition information via GSM module.
Single Phase Preventer
Protection of induction motors against single phasing or reverse phase or unbalance supply is one of the major problems in electrical systems. For safe running of 3-phase motors, special protections that keep a continuous watch on supply conditions are very essentials. The major cause of maximum motors burn-out is overloading which occurs due to unbalance supply or single phasing. Phase failure occurs in case of fuse blown-off, loose connections or loss of phase from the supply itself.
Driver Circuit
Driver circuits are most commonly used to amplify signals from controllers or microcontrollers in order to control power switches in semiconductor devices. Driver circuits often take on additional functions which include isolating the control circuit and the power circuit, detecting malfunctions, storing and reporting failures in the control system, serving as a precaution against failure, analyzing sensor signals, and creating auxiliary voltages.
Temperature and Humidity Sensor
A humidity sensor (or hygrometer) senses, measures and reports the relative humidity in the air. It measures both moisture and air temperature. Relative humidity is the ratio of actual moisture in the air to the highest amount of moisture that can be held at that air temperature. The warmer the air temperature is, the more moisture it can hold. Humidity / dew sensors use capacitive measurement, which relies on electrical capacitance. Electrical capacity is the ability of two nearby electrical conductors to create an electrical field between them. The sensor is composed of two metal plates and contains a non-conductive polymer film between them. This film collects moisture from the air, which causes the voltage between the two plates to change. These voltage changes are converted into digital readings showing the level of moisture in the air. The use of sensors can afford quantitative information to help guide and automate the decision-making process for irrigation. Such sensors include those that are generally used for weather stations as well as sensors to monitor the water status of the soil or substrate, and sensors that can be used to monitor and troubleshoot irrigation systems. Although collecting data with sensors is relatively easy, data are only useful if the sensors are used correctly and the limitations of sensors are understood. Optimizing the value of the collected data requires selecting the best sensor (s) for a particular purpose, determining the optimal number of sensors to be deployed, and assuring that collected data are as accurate and precise as possible. We designate general sensing principles and how these principles can be applied to a variety of sensors. Based on our experience, proper use of sensors can result in large increases in irrigation efficiency and improve the profitability of ornamental production in greenhouses and nurseries.
Simulation Circuit Diagram
This simulation diagram shows the simulation circuit diagram for our project. This circuit is constructed in proteus software. To control both motor pump and solenoid valve. During the normal condition period the motor is in off condition and the valve 1 & valve 2 are in closed position. And the sensor will sense the temperature of the soil condition and send information to the registered mobile via through GSM module. If any 1 of the land is in dry condition the information will be passed to the number and we are controlled the valve for to open. After the land gets wet condition and motor are turned off and valve are closed. In the display unit the information is displayed like the land conditions, valve conditions and the message received condition. In this software we constructed the circuits and checked all the running conditions. The AC voltage 220V is connected to a transformer, which steps that AC voltage down to the level of the 5V desired AC output. A rectifier provides a full-wave rectified voltage that is initially filtered by a simple capacitor filter to produce a DC voltage. This resulting DC voltage, usually has some ripple or AC voltage variation. A regulator circuit removes the ripples and also remains the same DC value even if the input DC voltage varies, or the load connected to the output DC voltage changes.
Block Diagram-Land Side
In this project PIC16f877a is used as a main unit, which is used to process the signal that comes from GSM modem and all peripherals. The PIC microcontroller is used in this project to identify the phone call which is coming through the GSM modem. In that time the PIC microcontroller reduces the speed of the motor and inform the user that the arrival of a call by using an LCD. The 16x2 LCD is used as a display unit which is used for alerting the user, two PORTS are allocated for LCD connections.
The regulated 5V supply is connected to the GSM modem and all peripherals. IC MAX232 is used to communicate between the PIC microcontroller and the GSM modem. The LCD16x2 is used as a display unit which is used for alerting the user, two PORTS are allocated for LCD connections. The soil sensor will sense the land condition, whether the land is wet or dry. And send the information through SMS to the registered mobile number via GSM module. After this information is received, the motor and the valves are controlled by sending SMS to the GSM circuit. If the land is dry condition, first the corresponding valve gets opened the motor is turned ON by the control of registered number. During the running period, the land changed from dry condition to wet condition the motor is automatically turned OFF and the valve gets closed.
When the power supply is given to the circuit, the display will show the title of the project. After the power supply is turned ON, the display shows that to enter the mobile number for receiving information from the field side. The display shows the registered mobile number. After the mobile are registered, the display will shows whether the mobile number is stored or not.
The display shows the land condition, motor condition and temperature of the soil. The display show the information about land condition, soil condition and water level in well. And the information is passed to the registered mobile number via GSM module. The SMS is sent by registered number to the GSM module, it check and operates according to the command. The SMS are send to the GSM module, if it received it shows in the display. The wrong command are send to the GSM module, the display shows that as invalid format. The command is correct to turn ON the motor, it will be displayed in display unit.
Conclusion
IoT is brought forwards to use in automatic irrigation. Thus the project of the irrigation system and water consumption for agricultural crop based on wireless network that are IoT and radio communications. The photovoltaic power supply to the field unit. Any cell phone can send command to the controllers or collect information from the controller. Only during day time a valves can be controlled because of solar photovoltaic. GSM and radio frequency provides credible communication for the device. | 5,776.4 | 2017-12-22T00:00:00.000 | [
"Computer Science"
] |
REVIEW AND OUTLOOK ON KAON PHYSICS ∗
The status of kaon physics and its prospects are reviewed. A new round of experiment is taking data with the potential of making a significant step in sensitivity on many fronts by the end of the decade
Introduction
After more than seventy years from their discovery [1], kaons are still an important tool to address fundamental questions in particle physics. The compelling questions that we wish to answer using kaons include: -Are there any other sources of CP violation in addition to the complex phase of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix?
-To which extent is the lepton universality respected?
-Are there more than three generations of fundamental fermions?
-How far can we push the high-energy frontier looking at rare processes, is there new physics accessible from the study of loop-mediated meson decays?
-Can we test other fundamental symmetries such us CPT? To which extent?
The common denominator of the above questions is that they all require a solid theoretical foundation where clear questions can be formulated without too much ambiguity. In this respect, kaons are particularly interesting because the Standard Model (SM) expectations are precise and not affected by large hadronic uncertainties. It is to be noted that experiments in this area of particle physics tend to be innovative and not simply the incremental improvement of the previous generation. Conversely though, the field is mature and a significant effort is needed to challenge our understanding.
In conclusion, even if we have entered the Higgs Boson era, a strong kaon physics programme seems perfectly justified.
A new round of kaon experiments is taking data, KLOE-II at the Frascati Φ factory, KOTO at J-PARC and NA62 at CERN, together with OKA at Protvino and LHCb at the LHC promise to make steady progress in order to address the compelling questions listed above.
Direct CP violation
An important experimental endeavor took place two decades ago to measure direct CP violation in the kaon system. The quantity of interest is A non-zero value of Re / signals the direct CP violation, that is CP violation present in the decay of the neutral kaon and not limited to the mixing of the meson. To perform this important test about the origin of CP violation, experiments were made on both sides of the Atlantic [2,3]. The experimental situation was settled at the beginning of this century, when it was demonstrated that Re / is different from zero beyond doubt [4]: 16.6 ± 2.3 × 10 −4 . While the measurements firmly establish direct CP violation and rule out superweak models [5], a precise theoretical prediction within the SM remains difficult to obtain because of cancellations between electroweak (EW) and QCD penguin operators.
A useful SM formula to display the difficulty of the theoretical calculation is the following [6]: where Im λ t = Im V td V * ts = |V ub ||V cb | sin γ, isospin breaking corrections are parameterized by a = 1.017 andΩ eff = (14.8 ± 8.0) × 10 −2 , and B (1/2) 6 and B (3/2) 8 are the parameters describing the QCD and EW penguins which tend to cancel each other. It is obvious from the formula that the relative importance of the QCD and EW penguins can affect the SM prediction by an order of magnitude. According to dual QCD [6], the following inequality holds: B The first lattice calculation [7] gives Re / = 1.4 ± 6.9 × 10 −4 in agreement with the large-N methods and sizably below the experimental number by 2.1 standard deviations. The effect of pion rescattering on enhancing the QCD penguin and hence the SM prediction for Re / is still debated [8]. The situation is well-summarized with a picture (Fig. 1) and Table I, where a comparison between experiment and some theoretical results are compared. Hopefully, lattice QCD will be able to clarify the SM prediction soon. 1.9 ± 4.5 KNT [10] 1.1 ± 5.1 Pich et al. [8] 15.0 ± 7.0 BEF [11] 22.0 ± 8.0 Lattice QCD [7] 1.4 ± 6.9 3. K S → π 0 π 0 π 0 In the SM of quark mixing, the CP violation due to neutral kaon mixing effects is expected to be the same in K L and K S once the different lifetimes and quantum numbers of the final states are taken into account. A nice, clean test can be performed measuring the decay K S → π 0 π 0 π 0 which is purely CP violating. Progress is expected from the KLOE II Collaboration at the Frascati Φ factory. The prediction in the SM is where | | = (2.228 ± 0.011) × 10 −3 is the mixing parameter. Currently, the best limit is provided by the KLOE Collaboration [12]: BR(K S → π 0 π 0 π 0 ) ≤ 2.6×10 −8 at 90% C.L. A deviation from the predicted value would be a sign of CP violation beyond the complex phase present in the CKM matrix.
Test of CKM unitarity
With lattice QCD now able to reliably calculate decay constants and form factors of light masons, the precise testing of unitarity of the up-quark coupling to the down-type quarks can be performed with great precision testing the unitarity relation |V ud | 2 +|V us | 2 +|V ub | 2 = 1. In this relation, the coupling of the up to down quark is obtained from super-allowed 0 + → 0 + transitions, while the coupling of the up quark to the strange one is measured from the study of leptonic and semi-leptonic kaon decays. Following the analysis presented by Marciano and Blucher for the PDG, we can conclude that the unitarity is respected to a precision of |V ud | 2 + |V us | 2 + |V ub | 2 = 0.9995 ± 0.0005, providing a test of the SM radiative correction with a precision of a few % [4].
Rare kaon decays
The holy grail of kaon physics is represented by the K → πνν decays. The neutrino-antineutrino pair in the final state guarantees that long-distance contributions from electromagnetic interactions and hadronic uncertainties are small. This is in contrast with respect to the final states with pairs of charged leptons.
Rare decays proceeding through flavor changing neutral currents such as K → πνν are important because their Standard Model contribution appears only at loop level and it is strongly suppressed. Thus, any contribution due to undiscovered particles can affect the measurement but not the Standard Model prediction. Therefore, a discrepancy between the observed rate and the predicted one would signal something interesting beyond the Standard Model.
The SM prediction [13] reads As it can be seen in the formula above, it is noteworthy that the precision of the prediction is dominated by the uncertainty of a combination of CKM elements. Those elements will be better determined in the future (by experiments such as LHCb and Belle II). Therefore, the ultimate impact of the test will be defined by the achievable experimental precision and not by the purely theoretical error which is very small.
Evidence for this decay was obtained by experiments BNL-E787/E949 made with stopped kaons. The final result of these experiments is [14] BR K + → π + νν = 17.3 +11.5 −10.5 , which is consistent with the SM prediction but it is affected by a large error, still allowing large effects beyond the standard to be present. The probability that all the candidate events reported are due to background is not negligible and was quoted to be 10 −3 [14].
To bridge the gap between the theory and the experimental error, the NA62 experiment [15] at CERN has been built. It exploits the decay in flight technique and it has started data taking in 2016. Details about the experiments can be found in [15] and here only the main features are recalled. Protons accelerated by the CERN super proton synchrotron (SPS) to 400 GeV are slowly extracted and directed onto a 40 cm long Be target. A secondary beam of hadrons is selected with a mean momentum of 75 GeV/c and a momentum bite of 1%. About 6% of the hadrons are kaons, while the majority of the rest is composed by pions and protons. The beam is debunched in order to have as little as possible of the memory of the SPS RF structures. To tag the incoming kaons, the beam passes through a differential Cherenkov counter adjusted to give a positive response to kaons of 75 GeV/c. While only a small fraction of the beam is made of kaons, each particle has to be tracked because there is no way to distinguish the kaons that will decay from any other beam particle. To do so, a novel Si pixel detector (gigatracker) with 100 ps time resolution has been developed. After being tracked, the particles enter a long decay tank surrounded by large angle vetoes (LAVs) that veto photons from K + → π + π 0 decays. The useful decay region is approximately 60 m long depending on fiducial cuts. To avoid excessive multiple scattering, the tracking detectors (Straws) are installed and operated inside the decay tank under vacuum. Pion/muon separation is provided by a Ring Imaging Cherenkov (RICH), while hermetic coverage to photons in the forward region is provided by a liquid krypton calorimeter (LKr) and smaller shashlik calorimeters (IRC and SAV). Coverage is completed by a hadron sampling calorimeter (HASC).
The in-flight technique has the advantage to avoid the material of the stopping target and the disadvantage that only a small fraction of the beam kaons (10%) usefully decays. An additional bonus of the in-flight technique is the excellent control of backgrounds and a much larger useful detector acceptance.
The commissioning of the experiment was completed in 2016 and physics data taking is in progress and will last at least towards the end of 2018 when CERN enters a two-year long shutdown (LS2).
Data from the first physics run in 2016 have been analyzed and a first result of the search for K + → π + νν in flight was recently reported [16,17]. Based on one candidate event and a background expectation of 0.15 ± 0.09 background events, the upper limit BR(K + → π + νν) ≤ 14 × 10 −10 at 95% C.L. is placed. This is based on a statistics of only about 1% of the total expected NA62 statistics and already places competitive bounds on the kinematical region contained between the ππ and πππ thresholds. NA62 has accumulated already 20 times more data and assuming a successful 2018 data taking should have about 20 SM events on tape before the CERN long shutdown 2 (LS2).
As ultimate measurement of K rare decays, one should mention K L → π 0 νν. Currently this decay is studied by the KOTO experiment at J-PARC. The best limit still comes from the predecessor experiment E391a at KEK [18], which placed an upper limit of BR(K L → π 0 νν) ≤ 2.6×10 −8 at 90% C.L.
Lepton universality and lepton flavor violation
Hints of lepton non-universality have emerged from the analysis of B decays. More generally, a lot of investigations have taken place in the study of angular resolutions, effective operators and ratio of branching fractions of B decays in pairs of electrons and muons. While updates on the analyses are eagerly awaited, the questions stand out to which extent deviations are seen in other systems. In kaon physics, a good test is made comparing the ratio .
The most precise determination to date has been obtained by NA62 [19]: R K = 2.488 ± 0.010 × 10 −5 . This result is in a good agreement with the SM prediction and it will be further improved with new NA62 data. The main systematics of the measurement will be reduced thanks to the presence of a RICH detector and the absence of material in between the tracking stations.
With the discovery of neutrino oscillations, which implies that the lepton flavor is not conserved, the question arises whether the lepton flavor violation is also violated in the sector of charged leptons. Although the origin of possible flavor violation could be totally different between the case of neutral and charged leptons (in the case of the neutrinos, it is observable because the almost degenerate nature of neutrino masses), all efforts devoted to improve the experimental situation are worth pursuing. The data sample collected by NA62 promises to increase the existing limits of lepton flavor violation in kaon decays by an order of magnitude. | 2,951.2 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
Topological AdS/CFT
We define a holographic dual to the Donaldson-Witten topological twist of N=2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathcal{N}=2 $$\end{document} gauge theories on a Riemannian four-manifold. This is described by a class of asymptotically locally hyperbolic solutions to N=4\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathcal{N}=4 $$\end{document} gauged supergravity in five dimensions, with the four-manifold as conformal boundary. Under AdS/CFT, minus the logarithm of the partition function of the gauge theory is identified with the holographically renormalized supergravity action. We show that the latter is independent of the metric on the boundary four-manifold, as required for a topological theory. Supersymmetric solutions in the bulk satisfy first order differential equations for a twisted Sp(1) structure, which extends the quaternionic Kähler structure that exists on any Riemannian four-manifold boundary. We comment on applications and extensions, including generalizations to other topological twists.
1 Introduction and outline The AdS/CFT correspondence is a conjectured duality relating certain quantum field theories (QFTs) to quantum gravity [1]. This typically relates a strong coupling limit in field theory to semi-classical gravity, and quantitative comparisons between the two sides usually rely on additional symmetries, such as supersymmetry or integrability. Starting with the work of [2], recently localization techniques in supersymmetric gauge theories defined on rigid supersymmetric backgrounds have led to new exact computations. Moreover, the appropriate strong coupling limits have been successfully matched to semi-classical gravity JHEP12(2017)039 calculations, in a variety of different set-ups. 1 On the other hand, localization in QFT originated in [4], where the topological twist was introduced to define a topological quantum field theory (TQFT). It is natural to then ask whether one can define and study holography in this topological setting. Indeed, what does gravity tells us about TQFT, and vice versa?
In this paper, we take some first steps in this direction.
Background
In [4], Witten gave a physical construction of Donaldson invariants of four-manifolds [5][6][7] as certain correlation functions in a TQFT. This theory is constructed by taking pure N = 2 Yang-Mills gauge theory and applying a topological twist: identifying a background SU(2) R-symmetry gauge field with the right-handed spin connection results in a conserved scalar supercharge Q, on any oriented Riemannian four-manifold (M 4 , g). The path integral localizes onto Yang-Mills instantons, and correlation functions of Q-invariant operators localize to integrals of certain forms over the instanton moduli space M. These are precisely Donaldson's invariants of M 4 . They are, under certain general conditions, independent of the choice of metric g on M 4 , but in general depend on the diffeomorphism type of M 4 . In particular, Donaldson invariants can sometimes distinguish manifolds which are homeomorphic but not diffeomorphic. That this is possible is because the instanton equations are PDEs, which depend on the differentiable structure. From the TQFT point of view, independence of the choice of metric follows by showing that metric deformations lead to Q-exact changes in the integrand of the path integral. For example, the stressenergy tensor is Q-exact, implying that the partition function is invariant under arbitrary metric deformations, and hence (formally at least) is a diffeomorphism invariant.
Donaldson-Witten theory is typically studied for pure N = 2 Yang-Mills, with gauge group G = SU (2) or G = SO (3). However, the topological twist may be applied to any N = 2 theory with matter, and also for any gauge group G . For example, G = SU(N ) Donaldson invariants were first studied in [8], with further mathematical work in [9]. In particular the latter reference contains some explicit large N results for the partition function on certain four-manifolds. The procedure of topological twisting may also be applied to theories with different amounts of supersymmetry, and in various dimensions. For example, the larger SU(4) R-symmetry of four-dimensional N = 4 Yang-Mills leads to three inequivalent twists [10]. Viewing the N = 4 theory as an N = 2 theory coupled to an adjoint matter multiplet, applying the Donaldson-Witten twist leads to a TQFT that is referred to as the "half-twisted" N = 4 theory. This theory is relevant for the construction in the present paper. The other two twists are the Vafa-Witten twist [11], and the twist studied by Kapustin-Witten in [12], relevant for the Geometric Langlands programme. Historically the development of Donaldson-like invariants took a rather different direction after the introduction of Seiberg-Witten invariants in [13]. The former may be expressed (conjecturally) in terms of the latter, but Seiberg-Witten theory is simpler and easier to compute with.
JHEP12(2017)039
The Donaldson-Witten twist of N = 2 gauge theories can be understood as a special case of rigid supersymmetry. Soon after Witten's paper, Karlhede-Roček interpreted the construction as coupling the gauge theory to a background (i.e. non-dynamical) N = 2 conformal gravity [14]. The background SU(2) R-symmetry gauge field is part of this gravity multiplet, and is embedded into the spin connection in such a way that the Killing spinor equations of the theory admit a constant solution, leading to the conserved scalar supercharge Q. There is also an auxiliary scalar field turned on in this background gravity multiplet, proportional to the Ricci scalar curvature of (M 4 , g). Motivated by the work of Pestun in [2], the last few years have seen considerable interest in defining rigid supersymmetry more generally on Riemannian manifolds. Unlike the topological twist, this generally requires the background d-manifold (M d , g) to possess some additional geometric structure, and correlation functions of Q-invariant observables then usually depend on this structure. For example, one can couple four-dimensional N = 1 theories with a U(1) R-symmetry to a background new minimal supergravity. Geometrically this construction requires (M 4 , g) to be a Hermitian four-manifold, with an integrable complex structure [15,16]. Generalizing [14], similarly N = 2 theories may be coupled to a background N = 2 conformal supergravity [17]. Generically this requires the existence of a conformal Killing vector on (M 4 , g), but the topological twist arises as a degenerate special case, in which (M 4 , g) is arbitrary.
An interesting application of these constructions is to the AdS/CFT correspondence. Here strong coupling (typically large rank N ) gauge theory computations are related to semi-classical gravity. The general idea is as follows. Rigid supersymmetry generically equips the background manifold (M d , g), on which the gauge theory is defined, with certain additional geometric structure, such as the integrable complex structure mentioned for four-dimensional N = 1 theories above. In the gravitational dual description one seeks solutions to an appropriate supergravity theory in d + 1 dimensions, where (M d , g) arises as a conformal boundary. That is, the (d + 1)-dimensional metric is asymptotically locally hyperbolic, approximated by dz 2 z 2 + 1 z 2 g to leading order in z near the conformal boundary at z = 0. A saddle point approximation to quantum gravity in this bulk then identifies (1.1) Here Z[M d ] denotes the partition function of the gauge theory defined on M d , while S[Y d+1 ] is the holographically renormalized supergravity action, evaluated on an asymptotically locally hyperbolic solution to the equations of motion of the (d + 1)-dimensional theory. The manifold M d = ∂Y d+1 is the conformal boundary, with the boundary conditions for supergravity fields on Y d+1 fixed by the rigid background structure of M d . The general AdS/CFT relation (1.1) is somewhat schematic, and both sides must be interpreted appropriately. For example, in order to make sense of the left hand side for topologically twisted four-dimensional N = 2 SCFTs it can be refined, as discussed in section 6.1. On the other hand, the sum on the right hand side of (1.1) is not well understood. One should certainly include all saddle point solutions on smooth manifolds Y d+1 . However, the existence of such a filling immediately implies that M d has trivial [18][19][20]) suggest that requiring Y d+1 to be smooth is in any case too strong: one should allow for certain types of singular fillings of (M d , g), and indeed these may even be the dominant contribution in (1.1) (especially for non-trivial topologies of M d ). There are some clear constraints, although no general prescription. 3 The supergravity action S typically scales with a positive power of N , and in the N → ∞ limit only the solution of least action contributes to (1.1) at leading order, with contributions from other solutions being exponentially suppressed.
Outline
In this paper we construct a holographic dual to the Donaldson-Witten twist of fourdimensional N = 2 gauge theories. As already mentioned, this twist may be interpreted as coupling the theory to a particular background N = 2 conformal gravity multiplet. On the other hand, four-dimensional N = 2 conformal gravity arises on the conformal boundary of asymptotically locally hyperbolic solutions to the Romans [22] N = 4 + gauged supergravity in five dimensions [23]. The real Euclidean signature version of this theory described in section 2 has, in addition to the bulk metric G µν , an SU(2) R-symmetry gauge field A I µ (I = 1, 2, 3), a one-form C, and a scalar field X. (In general there is also a doublet of B-fields, but this is zero for the topological twist boundary condition, and moreover may be consistently set to zero in the Romans theory.) The main property of a topological field theory is that appropriate correlation functions, including the partition function, are independent of any choice of metric. Assuming one is given an appropriate solution to the Romans theory with (M 4 , g) as conformal boundary, we therefore expect the holographically renormalized action to be independent of g.
Here one can mimic the field theory argument in [4], and attempt to show that arbitrary deformations g ij → g ij + δg ij leave this action invariant. We have the general holographic Ward identity formula Here S is the renormalized supergravity action of the Euclidean Romans theory, defined in section 2, while (g ij , A I i , X 1 ) are the non-zero background fields in the N = 2 conformal gravity multiplet for the topological twist. Equivalently, these arise as boundary values of the Romans fields: in particular A I i is simply the restriction of the bulk SU(2) Rsymmetry gauge field to the boundary at z = 0, while X 1 = lim z→0 (X − 1)/z 2 log z. For the topological twist these quantities are all fixed by the choice of metric g ij : A I i is fixed to be the right-handed spin connection, while X 1 = −R/12, where R = R(g) is the Ricci scalar for g. Thus the variations of these fields appearing in (1.2) are all determined by the 2 For example, in the case of interest in this paper d = 4, and Ω SO 4 ∼ = Z with the map to the integers being given by the signature σ( where p1 denotes the first Pontryagin class. A generator of Ω SO 4 ∼ = Z is the complex projective plane. 3 One might also speculate that the dominant contribution may come from complex saddle points; that is, from complex-valued metrics -see, for example, [21]. In this paper we focus on real solutions.
JHEP12(2017)039
metric variation δg ij . On the other hand, T ij , J i I and Ξ are respectively the holographic vacuum expectation values (VEVs) of the operators for which these boundary fields are the sources. In particular T ij is the holographic stress-energy tensor. As is well-known, the expansion of the equations of motion near z = 0 does not fix these VEVs in terms of boundary data on M 4 , but rather they are only determined by regularity of the solution in the interior. Determining these quantities for fixed boundary data is thus an extremely non-linear problem. What allows progress in this case is supersymmetry: the partition function should be described by a supersymmetric solution to the Romans theory. 4 By similarly solving the Killing spinor equations in a Fefferman-Graham-like expansion, we are able to compute these VEVs for a general supersymmetric solution. This still leaves certain unknown data, ultimately determined by regularity in the interior, but remarkably these constraints are sufficient to prove that (1.2) is indeed zero, for arbitrary δg ij ! More precisely, we show that the integrand on the right hand side is a total derivative, and its integral is then zero provided M 4 is closed, without boundary. The computation, although in principle straightforward, is not entirely trivial, and along the way we require some interesting identities that are specific to Riemannian four-manifolds (notably the quadratic curvature identity of Berger [24]). This is the main result of the paper, but it immediately raises a number of interesting questions. We postpone our discussion of these until later in the paper, notably at the end of section 4, and in sections 5 and 6.
The outline of the paper is as follows. In section 2 we define the relevant fivedimensional Euclidean N = 4 + gauged supergravity theory, and holographically renormalize its action S. In section 3 we show that on the conformal boundary of an asymptotically locally hyperbolic solution to this theory one obtains the supersymmetry equations [17] of Euclidean N = 2 conformal supergravity, which admits [14] the topological twist as a solution. We then expand the bulk supersymmetry equations in a Fefferman-Graham-like expansion. Section 4 contains the main proof that δS/δg ij = 0, while in section 5 we reformulate the supersymmetry equations in terms of a first order differential system for a twisted Sp(1) structure. On the conformal boundary this induces the canonical quaternionic Kähler structrure that exists on any oriented Riemannian four-manifold. This paper raises a number of interesting questions, prompting further computations, and the results may potentially be extended and generalized in a number of different directions. We comment on some of these issues in section 6.
Holographic supergravity theory
We begin in section 2.1 by defining a real Euclidean section of N = 4 + gauged supergravity in five dimensions. A Fefferman-Graham expansion of asymptotically locally hyperbolic solutions to this theory is constructed in section 2.2, for arbitrary conformal boundary four-manifold (M 4 , g). Using this, in section 2.3 we holographically renormalize the action.
Euclidean Romans N = 4 + theory
The Lorentzian signature Romans N = 4 + theory [22] is a five-dimensional SU(2) × U(1) gauged supergravity which admits a supersymmetric AdS 5 vacuum. It is a consistent truncation of both Type IIB supergravity on S 5 [25], and also eleven-dimensional supergravity on an appropriate class of six-manifolds N 6 [26]. The bosonic sector comprises the metric G µν , a dilaton φ, an SU(2) R Yang-Mills gauge field A I µ (I = 1, 2, 3), a U(1) R gauge field A µ , and two real anti-symmetric tensors B α µν , α = 4, 5, which transform as a charged doublet under U(1) R ∼ = SO(2) R . It is convenient to introduce the scalar field X ≡ e − 1 √ 6 φ and the complex combinations B ± ≡ B 4 ± iB 5 . The associated field strengths are F = dA, F I = dA I − 1 2 ǫ I JK A J ∧A K , and H ± = dB ± ∓iA∧B ± . We have set the gauged supergravity gauge coupling to 1. 5 The bosonic action and equations of motion in Lorentzian signature appear in [25]. However, as we are interested in holographic duals to TQFTs defined on Riemannian fourmanifolds, we require the Euclidean signature version of this theory. The Wick rotation in particular introduces a factor of i into the Chern-Simons couplings, leading to the Euclidean action Here R = R(G) denotes the Ricci scalar of the metric G µν , and * is the Hodge duality operator acting on forms. The associated equations of motion are: 6
4)
In general equations (2.2)-(2.6) are complex, and solutions will likewise be complex. However, note that setting iA ≡ C effectively 5 In addition we have rescaled the SU(2)R gauge field and the anti-symmetric tensors by a factor of 1/ √ 2, compared to [25].
6 Equation (2.3) incorporates a correction to the Lorentzian equation, in line with [26].
JHEP12(2017)039
removes all factors of i. We may then consistently define a real section of this Euclidean theory in which all fields, and in particular C and B ± = B 4 ± iB 5 , are real. We henceforth impose these reality conditions. Although globally A is a U(1) R gauge field in the original Lorentzian theory, after the above Wick rotation the real field C = iA effectively becomes an SO(1, 1) R gauge field. We may then think of C as a global one-form, but for which the theory has a symmetry C → C −dλ, for any global function λ. We denote the corresponding field strength as G ≡ dC = iF.
In the Lorentzian theory the fermionic sector contains four gravitini and four dilatini, which together with the spinor parameters ǫ all transform in the fundamental 4 representation of the Sp(2) R global R-symmetry group. The SU(2) × U(1) ⊂ Sp(2) gauge symmetry arises as a gauged subgroup. Since Sp(2) ∼ = Spin(5) it is natural to introduce the associated Clifford algebra Cliff(5, 0), with generators Γ A , A = 1, . . . , 5, satisfying {Γ A , Γ B } = 2δ AB . We then decompose I, J, K = 1, 2, 3, transforming in the 3 of SU(2), and α, β = 4, 5 in the 2 of U(1). In Euclidean signature the conditions for preserving supersymmetry are then the vanishing of the following supersymmetry variations of the gravitini and dilatini, respectively: where the covariant derivative is Here γ µ , µ = 1, . . . , 5, are generators of the Euclidean spacetime Clifford algebra, satisfying {γ µ , γ ν } = 2G µν , where recall G µν is the metric. Given the gauging it is natural to introduce the following choice of generators: where σ I are the Pauli matrices, and 1 2 denotes the 2 × 2 identity matrix. In particular notice that Γ 45 = iσ 3 ⊗ 1 2 squares to −1 4 , and we may write where the spinor doublets ǫ ± denote projections onto the ±i eigenspaces of Γ 45 , respectively. One then has (2.12)
JHEP12(2017)039
We next introduce the charge conjuguation matrix C for the Euclidean spacetime Clifford algebra. By definition γ * µ = C −1 γ µ C , and one may choose Hermitian generators γ † µ = γ µ together with the conditions C = C * = −C T , C 2 = −1. We may then define the following charge conjugate spinor in Euclidean signature It is straightforward to check that (ǫ c ) c = ǫ. Moreover, provided C = iA and B ± (and all other bosonic fields) are real, then one can show that ǫ satisfies the gravitini and dilatini equations (2.7), (2.8) if and only if its charge conjugate ǫ c satisfies the same equations.
Given this property, we may consistently impose the symplectic Majorana condition ǫ c = ǫ.
We will be interested in solutions that satisfy these reality conditions.
Fefferman-Graham expansion
In this section we determine the Fefferman-Graham expansion [27] of asymptotically locally hyperbolic solutions to this Euclidean Romans theory. This is the general solution to the bosonic equations of motions (2.2)-(2.6), expressed as a perturbative expansion in a radial coordinate near the conformal boundary.
We take the form of the metric to be [27] G µν dx µ dx ν = 1 where the AdS radius ℓ = 1, and in turn we have the expansion Here g 0 ij = g ij is the boundary metric induced on the conformal boundary M 4 at z = 0. It is convenient to introduce the inner product α, β between two p-forms α, β via where vol denotes the volume form, with associated Hodge duality operator * . The volume form for the five-dimensional bulk metric (2.14) is The determinant may then be expanded in a series in z, around that for g 0 , as follows Here we have denoted t (n) ≡ Tr (g 0 ) −1 g n , u (n) ≡ Tr (g 0 ) −1 h n and t (2,2) ≡ Tr (g 0 ) −1 g 2 2 .
JHEP12(2017)039
The remaining bosonic fields are likewise expanded as follows:
22)
A priori there are additional terms that appear in these expansions. However, these may either be gauged away, or turn out to be set to zero by the equations of motion, and we have thus removed them in order to streamline the presentation. We now substitute the above expansions into the equations of motion (2.2)-(2.6) and solve them order by order in the radial coordinate z in terms of the boundary data g 0 = g, X 1 , A I , a and b ± . This will leave a number of terms undetermined. For the Einstein equation (2.6) we will need the Ricci tensor of the metric (2.14): Here ∇ is the covariant derivative for g, and we have corrected the sign of R(g) ij and the right hand side of (2.25) compared to [28]. Examining first the equation (2.5) gives at leading order * g 0 b ± = ∓b ± , (2.26) so that the boundary B-fields b + , b − are required to be anti-self-dual and self-dual, respectively. At subleading orders one finds In particular notice that the first equation fixes b ± 1 in terms of boundary data, while the second equation determines only the anti-self-dual/self-dual parts of b ± 2 , respectively. An equation may also be derived for b ± 3 , although we will not need this in what follows. Next the gauge field equations (2.3), (2.4) determine
JHEP12(2017)039
in terms of boundary data, where the curvatures are f ≡ da, F I ≡ dA I − 1 2 ǫ I JK A J ∧ A K , and we have introduced a gauge covariant derivative with respect to the boundary SU(2) field: Dα I ≡ dα I − ǫ I JK A J ∧ α K . In addition we have the constraints which leave a 2 and a I 2 partially undetermined. Turning next to the scalar equation of motion (2.2) we find We regard these as determining X 3 , X 4 in terms of X 1 (a boundary field), and X 2 (which is undetermined by the equations of motion), together with the other fields in the expansion.
In the second equation we have used the definition Finally, we introduce the matter modified boundary Ricci tensor Notice the scalar curvature is R(g 0 ) = R(g 0 ), due to the opposite duality properties (2.26) of b ± . From the ij component of the Einstein equation (2.6), using (2.24) gives The right hand side is a matter modified form of the Schouten tensor. From this expression we immediately deduce the traces The zz component of the Einstein equation in (2.6), together with (2.23), determines the traces of higher order components in the expansion of the bulk metric:
JHEP12(2017)039
Returning to the ij component we may determine the logarithmic terms in (2.15): The structure of the ij component of the Einstein equation in four dimensions is such that g 4 always appears with zero coefficient, and so is left undetermined. In the original literature [29] the iz component has been used to determine g 4 up to an arbitrary symmetric divergence-free tensor. However, in the supergravity we are considering the presence of a (log z) 2 contribution to the bulk scalar field expansion means that X 2 appears without a derivative, which hence spoils this approach. In section 3.4 we will see that by imposing supersymmetry we obtain further constraints on the fields, and in particular this leads to an expression for g 4 in terms of other data.
Holographic renormalization
Having solved the bulk equations of motion to the relevant order, we are now in a position to holographically renormalize the Euclidean Romans theory. The bulk action (2.1) is divergent for an asymptotically locally hyperbolic solution, but can be rendered finite by the addition of appropriate local counterterms. The corresponding computations in Lorentzian signature have been carried out in [23]. We begin by taking the trace of the Einstein equation (2.6). Substituting the result together with (2.5) into the Euclidean action (2.1), we arrive at the bulk on-shell action Here Y 5 is the bulk five-manifold, with boundary ∂Y 5 = M 4 . In order to obtain the equations of motion (2.2)-(2.6) from the original bulk action (2.1) on a manifold with boundary, one has to add the Gibbons-Hawking term
JHEP12(2017)039
Here, more precisely, one cuts Y 5 off at some finite radial distance, or equivalently nonzero z > 0, and (M 4 , h) is the resulting four-manifold boundary, with trace of the second fundamental form being K. Recall from (2.14) that h ij = 1 z 2 g ij . The combined action I on-shell +I GH suffers from divergences as the conformal boundary is approached. To remove these divergences we use the standard method of holographic renormalization [28][29][30]. Namely, we introduce a small cut-off z = δ > 0, and expand all fields via the Fefferman-Graham expansion of section 2.2 to identify the divergences. These may be cancelled by adding local boundary counterterms. We find (2.43) Notice the somewhat unusual form of the logarithmic term for the scalar field X, but cf. the expansion (2.19). As is standard, we have written the counterterm action (2.43) covariantly in terms of the induced metric h ij on M 4 = ∂Y 5 . The total renormalized action is then which by construction is finite. The choice of local counterterms (2.43) defines a particular renormalization scheme, that is in some sense a "minimal scheme" in the case at hand. However, we are free to consider a non-minimal scheme where we add local counterterms to the action which remain finite as δ → 0. For the supergravity theory we are considering, the following are an independent set of finite counterterms that are both diffeomorphism and gauge invariant: 7 Here ζ 1 , . . . , ζ 8 are arbitrary constant coefficients, C ijkl denotes the Weyl tensor of the metric h ij , while the Euler scalar E and Pontryagin scalar P are respectively In particular, notice that for compact M 4 = ∂Y 5 without boundary, the second line of (2.45) are all topological invariants: they are proportional to the Euler number χ(M 4 ), the signature σ(M 4 ), and the Chern numbers M 4 c 1 (L) 2 , M 4 c 2 (V) respectively, where L and V denote the rank 1 and rank 2 complex vector bundles associated to the U(1) R and SU(2) R gauge bundles, respectively. In the real Euclidean theory in which we are working, recall that F = dA is globally exact (and purely imaginary), and in any case for the topological twist studied later in the paper we will have A | M 4 = 0. Being topological invariants, the variation of the action we shall compute in section 4 will be insensitive to the choice of constants ζ 5 , . . . , ζ 8 .
As emphasized in [31], in order to make quantitative comparisons in AdS/CFT it is important to match choices of renormalization schemes on the two sides. In particular, localization calculations in QFT make a (somewhat implicit) choice of scheme. In the case at hand, we note that in [32] a supersymmetric Rényi entropy, computed in field theory using localization, was successfully matched to a gravity calculation involving a supersymmetric black hole in the N = 4 + Romans theory. Here the supergravity action was computed using the minimal scheme. Our computation in section 4 will imply that this minimal scheme is indeed the correct one to compare to the topological twist of [4]. We shall make further comments on this, and the relation to recent papers [31,[33][34][35], in section 4.2.
Given the renormalized action we may compute the following VEVs: Here, as usual in AdS/CFT, the boundary fields g 0 ij = g ij , X 1 , A I i and a i act as sources for operators, and the expressions in (2.47) compute the vacuum expectation values of these operators. Similar expressions may also be written for the boundary fields b ± for B ± , but these will be zero for the topological twist of interest and play no role in the present paper. Using the above holographic renormalization we may write (2.47) as the following limits: where K ij is the second fundamental form of the cut-off hypersurface (M 4 , h ij ) and the B-field modified Bach tensor is (cf. (2.33)) Here * h denotes the Hodge duality operator for the metric h ij . A computation then gives the finite expressions Notice that these expressions contain a number of terms that are not determined, in terms of boundary data, by the Fefferman-Graham expansion of the bosonic equations of motion.
In particular the g 4 ij term in the stress-energy tensor T ij , the scalar X 2 that determines Ξ, and a I 2 , a 2 appearing in the SU(2) R and U(1) R current, respectively. The general holographic Ward identity corresponding to the first three variations of the action is given by equation (1.2). We will need the expressions (2.51)-(2.53) in section 4.
Supersymmetric solutions
In this section we study supersymmetric solutions to the Euclidean N = 4 + theory. We begin in section 3.1 by deriving the Killing spinor equations on the conformal boundary, starting from the bulk equations (2.7), (2.8). We precisely recover the Euclidean N = 2 conformal supergravity equations of [17]. In section 3.2 we then recall from [14] how the topological twist arises as a special solution to these Killing spinor equations, that exists on any Riemannian four-manifold (M 4 , g). We rephrase this in terms of the quaternionic Kähler structure that exists on any such manifold, involving (locally) a triplet of self-dual two-forms J I . Finally, in section 3.4 we expand solutions to the bulk spinor equations in a Fefferman-Graham-like expansion.
Boundary spinor equations
We begin by expanding the bulk Killing spinor equations (2.7), (2.8) to leading order near the conformal boundary at z = 0. We will consequently need the Fefferman-Graham expansion of an orthonormal frame for the metric (2.14), (2.15), together with the associated spin connection. The following is a choice of frame E µ µ for the metric (2.14): where e i i is a frame for the z-dependent metric g. The latter then has the expansion (2.15), but for the present subsection we shall only need that where e i i is a frame for the boundary metric g 0 = g. The non-zero components of the spin connection Ω νρ µ at this order are correspondingly where (ω (0) ) jk i denotes the boundary spin connection. The generators γμ of the Clifford algebra Cliff(5, 0) in this frame are chosen to obey It follows that γ 2 z = 1, and we may identify −γz with the boundary chirality operator. The bulk Killing spinor is then expanded as As in (2.11), we may further decompose the spinors ε, η into their projections ε ± , η ± onto the ±i eigenspaces of Γ 45 . At leading order in the z-component of the gravitino equation (2.7) one then finds − γzε ± = ±ε ± , (3.6) so that the Γ 45 eigenvalue of the leading order spinor ε is correlated with its boundary chirality. Similarly, at the next order in the gravitino equation one finds the opposite correlation for the spinor η: Recall that the boundary B-fields satisfy * 4 b ± = ∓b ± (see (2.26)). This together with the chirality conditions (3.6) implies that where · denotes the Clifford product (using the boundary frame). Using this, the leading order term in the i-component of the gravitino equation is then seen to be identically satisfied. The next order gives the pair of boundary Killing spinor equations:
JHEP12(2017)039
where we have defined the covariant derivative Here ∇ (0) i denotes the Levi-Civita spin connection of the boundary metric g 0 ij = g ij , and Turning to the bulk dilatino equation (2.8), the leading order term is in fact equivalent to the duality properties of b ± , given the chiralities of ε ± . At the next order we obtain the boundary dilatino equation The supersymmetry equations for four-dimensional Euclidean off-shell N = 2 conformal supergravity have been studied 8 in [17], and our equations (3.9), (3.11) precisely reproduce the equations in this reference. 9 Notice in particular that one can solve for the (conformal) spinor η by taking the trace of (3.9) with γ i , to obtain is the Dirac operator. Taking the covariant derivative of (3.9) and using the integrability condition for [D j ] then leads to the following form of the dilatino equation where R = R(g) is the Ricci scalar of the boundary metric. Requiring the boundary fields g ij , X 1 , a, A I , b ± to solve the spinor equations (3.9), (3.11) for ε ± in general imposes geometric constraints. Remarkably, in [17] it is shown that generically these conditions are equivalent to the boundary manifold (M 4 , g) admitting a conformal Killing vector. However, the topological twist background of [14] arises as a very degenerate case, where in fact (M 4 , g) may be an arbitrary Riemannian four-manifold. We turn to this case in the next subsection.
Topological twist
The topological twist background of [14] is obtained by setting 14) The boundary Killing spinor equation (3.9) immediately implies that ε + is covariantly constant D
JHEP12(2017)039
The dilatino equation, in the form (3.13), then fixes Recall that ε + is a doublet of positive chirality spinors: the Pauli matrices σ I act on these doublet indices, while the Clifford matrices γī act on the spinor indices. We may write out the covariant derivative in (3.15) more explicitly by first introducing the following explicit Hermitian representation Hereā = 1, 2, 3. Since γzε + = −ε + , we may identify each of the two spinors in the doublet ε + with a two-component spinor, acted on by the second 2 × 2 block. With these choices (3.15) reads where ηā ij are the self-dual 't Hooft symbols, and recall that (ω (0) ) jk i is the spin connection for the boundary metric g ij . One may then solve (3.18) by taking Here i = 1, 2 labels the doublet indices, while α = 1, 2 labels the positive chirality spinor indices, and notice that the frame indexā = 1, 2, 3 is identified with the gauge indices I = 1, 2, 3. It is straightforward to check that (3.19) solves (3.18), for any constant c.
The SU(2) R gauge field A I given by (3.19) is precisely the right-handed part of the spin connection, where recall that Spin(4) = SU(2) − × SU(2) + . Thus the SU(2) R gauge bundle is identified with SU(2) + . More invariantly, ε + is a section of S + ⊗ V, where S + denotes the positive chirality spinor bundle over M 4 , while V is the rank 2 complex vector bundle for which A I is an associated SU(2) connection. A priori this makes sense globally only when M 4 is a spin manifold, when S + and V both exist as genuine vector bundles. However, the topological twist (3.19) identifies V with S + , and their tensor product then always exists globally, even when M 4 is not spin. 10 This topological construction of a spin-type bundle on a manifold which is not necessarily spin was first suggested in [38], and is sometimes referred to as a Spin G structure, where here the group G = SU (2). Perhaps more familiar are Spin c structures, where instead G = U(1). (For example, this arises in Seiberg-Witten theory.) It will be convenient later to introduce the triplet of self-dual two-forms 20) JHEP12 (2017)039 where recall that e i i is the boundary frame for g ij . More explicitly, these read J 1 = e 2 ∧ e 3 + e 1 ∧ e 4 , J 2 = e 3 ∧ e 1 + e 2 ∧ e 4 , J 3 = e 1 ∧ e 2 + e 3 ∧ e 4 . (3.21) Of course, in general a frame e i i is only defined locally on M 4 , in an appropriate open set, and likewise the J I in (3.21) are then well-defined forms only locally. More globally, local frames are patched together with SO(4). The spin cover is Spin(4) ∼ = SU(2) − × SU(2) + , and the self-dual/anti-self-dual two-forms are precisely the representations associated to SO(3) ± = SU(2) ± /Z 2 . In particular, the {J I } rotate as a 3-vector under SO(3) + ⊂ SO(4). In this sense the J I in general don't exist individually as global two-forms on M 4 , but instead as a triplet of forms that rotate appropriately. We comment further on this below.
One can also write the J I in terms of spinor bilinears. Recall from the end of section 2.1 that the bulk spinors satisfy a symplectic Majorana reality condition. In particular the boundary spinor ε + satisfies where recall that C is the charge conjugation matrix for the spacetime Clifford algebra. In the explicit basis (3.17) we may take We then define the boundary spinor This has square normχχ = c 2 , where the bar denotes Hermitian conjugate, and χ of course has positive chirality, −γzχ = χ. One easily checks that where χ c ≡ C χ * . From the original definition (3.20), the J I inherit a number of algebraic identities from those for the 't Hooft symbols. For example, Using the metric to raise an index, one obtains a triplet (I I ) i j ≡ g ik (J I ) kj of endomorphisms of the tangent bundle of M 4 . These satisfy the quaternionic algebra (3.28)
JHEP12(2017)039
One also finds that where the R-symmetry gauge field A I here is precisely the right-handed spin connection given by the topological twist (3.19). Notice that we may correspondingly write the curvature as where R ijkl is the boundary Riemann tensor.
In general a quaternionic Kähler manifold is a Riemannian manifold of dimension 4n with holonomy Sp(n) · Sp(1) ⊂ SO(4n). 11 Such manifolds admit, locally, a triplet of skew endomorphisms I I of the tangent bundle satisfying (3.28), for which the corresponding triplet of two-forms J I satisfy (3.29). Here A I is the Riemannian connection corresponding to the Sp(1) part of this holonomy group. For n = 1 notice that Sp(1) · Sp(1) = SO(4), and such a structure exists on any Riemannian four-manifold (M 4 , g) (as we have just seen). Crucially, the two-forms (3.21) are not in general defined globally, but are (in our language) twisted by the R-symmetry gauge field, transforming as a vector under SO(3) R = SU(2) R /Z 2 . As such, they don't define a reduction of the structure group to SU(2) − , as a global set of such forms would do. Indeed, the globally defined tensor on a quaternionic Kähler manifold is the four-form Ψ ≡ J I ∧ J I (summed over I), and in four dimensions (n = 1) this is proportional to the volume form. The stabiliser of Ψ is Sp(n) · Sp(1), which is SO(4) when n = 1.
In dimensions n ≥ 2 irreducible quaternionic Kähler manifolds are automatically Einstein. Some authors choose to define a quaternionic Kähler four-manifold to be an Einstein manifold with self-dual Weyl tensor, but we shall not use this terminology.
U(1) R current
Before continuing to expand the spinor equations into the bulk, in this subsection we pause briefly to consider the VEV of the U(1) R current given by (2.54). In the topological twist background equation, (2.28) gives a 1 = 0, so that J = −a 2 /κ 2 5 . On the other hand, from (2.29) we obtain the U(1) R anomaly equation Indeed, in section 2.1 we noted that we are studying gravitational saddle points in the real Euclidean Romans theory, where the U(1) R gauge field A is a (purely imaginary) global one-form. Related to this, the U(1) R symmetry effectively becomes an SO(1, 1) R symmetry after Wick rotation, as also emphasized in [17] (see also [2]
Supersymmetric expansion
In this section we continue to expand the bulk spinor equations to higher order in z.
From this we extract further information about some of the fields which are not fixed, in terms of boundary data, by the bosonic equations of motion. We will continue to use the boundary conditions appropriate to the topological twist. In particular we note that the boundary B-fields b ± = 0 in this case, and that setting the bulk B ± = 0 is a consistent truncation of the Euclidean N = 4 + theory. Moreover, in this case the bulk spinors ǫ ± satisfy decoupled equations, and since the leading order term ε − = 0 it is then also consistent to set the bulk ǫ − = 0. We henceforth work in this truncated theory. This subsection is somewhat technical. All of the relevant formulas that we need in section 4 are in any case summarized in that section, and a reader uninterested in the details may safely skip the present subsection. The frame, spin connection and spinor expansions beyond the leading order given in section 3.1 will be needed, so we first give details of these. The frame expansion is 12 A little less laboriously we can instead note that F I is the curvature of the bundle of self-dual twoforms Λ + 2 M4, and the integral of the right hand side of (3.31) is proportional to the first Pontryagin class p1(Λ + 2 M4) = 2χ(M4) + 3σ(M4). 13 In passing we note that (3.34) corresponds (with an appropriate choice of orientation) to equality in the Hitchin-Thorpe inequality. In particular the only Einstein manifolds satisfying this condition are the flat torus, a K3 surface, or a quotient thereof [40]. A non-example is S 4 , for which 2χ(S 4 ) + 3σ(S 4 ) = 4. On the other hand, for a complex surface (3.34) is equivalent to M 4 c1 ∧ c1 = 0, where c1 = c1(M4) is the first Chern class of the holomorphic tangent bundle (the anti-canonical class).
JHEP12(2017)039
where in particular e i i is a frame for the boundary metric. The additional spin connection components we will need are The bulk spinor has ǫ − = 0 in our truncated theory, and we thus henceforth drop the superscript on ǫ + → ǫ, ε + → ε (we hope this abuse of notation won't lead to any confusion). The bulk spinor then has the following expansion where ε is constant with positive chirality under −γz. As in equation (3.22) the bulk spinor ǫ satisfies the reality condition ǫ c ≡ iσ 2 C ǫ * = ǫ . (3.38) We start by analysing the bulk dilatino equation. At lowest order we find which is satisfied identically, where we have used (3.16) and (3.30). At the next order we find This is effectively a matrix equation, of which we shall see many more. Components of such equations may be extracted by first noting that in the notation of section 3.2. For example, one can then take the first component of (3.40), and applyχγ j on the left. Taking the real part, and using the definitions (3.26) of J I in terms of spinor bilinears, one obtains We shall make use of similar manipulations throughout this subsection. Focusing on (3.42), recall that a I 1 is already fixed in terms of the SU(2) covariant divergence of F I , via equation (2.28). The latter reads (a I 1 ) i = 1 2 D j F I ij . Starting from this and (3.30), and using the identity α pq J I m p J I n q = α mn − 2( * α) mn , where α pq is any two-form, one can show that (3.42) is an identity. We may then differentiate (3.42) and, upon using the quaternionic Kähler equation (3.29), we obtain This relation appears frequently hereafter. At the next order in the dilatino equation we find an equation involving several undetermined fields:
JHEP12(2017)039
from which we similarly extract From this expression, taking a covariant derivative and symmetrizing indices gives At higher order still we have As ε has positive chirality we can act with P − = 1 2 (1 + γz) to deduce that ε 3 also has positive chirality. It then follows that where we have used (3.43). This expression for X 3 is equivalent to that in (2.30), for the topological twist. Finally, at order O(z 7/2 ) we have Here e i i is the inverse frame to e i i , with e i i and (e (2) ) i i being coefficients in its expansion, precisely as in (3.35). We have also defined f 2 = da 2 . Since ε 3 is so far undetermined, we cannot yet extract an expression for X 4 . This concludes the expansion of the bulk dilatino equation.
Turning next to the bulk gravitino equation, at lowest order in the z direction we find, after using the fact that ε 3 has positive chirality, that As a metric defines the frame only up to an arbitrary local SO(4) rotation, it is convenient to gauge fix this arbitrariness. A consistent gauge choice is (e (2) )ī i = 1 2 (g 2 )īj ej i and (e (2) ) ī i = − 1 2 e ī j (g 2 )jī, where recall that g 2 is fixed in terms of the boundary Schouten tensor via (2.34). This then implies that and, being symmetric, their contraction with any anti-symmetric tensor automatically vanishes. Consequently, this gauge choice reduces the relation between the spinors ε and ε 3 to simply
JHEP12(2017)039
Having found this relation we may substitute for ε 3 into the right hand side of (3.49), extract X 4 and then substitute for g 2 , X 1 , X 3 and F I to obtain (3.53) Here strictly speaking we have taken the real part of this equation, where the term involving f 2 is purely imaginary, and thus doesn't appear. Using the trace of (3.46), together with several other equations derived so far, one can check that the expression (3.53) for X 4 agrees with the expression (2.31), obtained from the equations of motion. At the next orders we find We could continue and analyse higher order terms in this z component of the gravitino equation, but the subsequent expressions are not required, nor particularly enlightening, and so we stop here. The remaining equation to study is the i direction of the gravitino equation. Crucially this involves the spin connection components Ω i zi , which introduce the metric expansion fields from (2.15). Of course, the leading order equation is satisfied by construction. Remarkably, at the next order we find a non-trivial equation which is also identically satisfied given the chirality of ε 3 and the algebraic properties of the Riemann tensor. At the following order we find another condition onε 5 : which, used in conjunction with (3.55), allows us to determine γ zε 5 =ε 5 ,ε 5 = − 1 24 dR · ε . (3.57) We now substituteε 5 into equation (3.54): Acting on this last equation with γz, and taking the difference, implies that ε 5 is a negative chirality spinor: γzε 5 = ε 5 . We thus find At the next order we begin to see the metric fields appearing: (3.60) Using the chiral projector P − again we see thatε 7 has positive chirality, and we may extract h 0 :
JHEP12(2017)039
This agrees with the expression h 0 ij = − 1 2 g ij X 2 1 , given by equation (2.39), derived from the expansion of the bosonic field equations. The next order gives As before, we can show thatε 7 has positive chirality and hence drops out of (3.62). Now using the definition ofε 5 in (3.57) allows us to write everything acting on the spinor ε. After using the intermediate result , (3.63) and substituting for the known expressions, we can then read off h 1 ij : Once again, we have found another expression for something we have already derived: h 1 ij is also given by equation (2.40). However, in this instance the equality of the two expressions (3.64) and (2.40) is non-trivial. It is equivalent to the equation The first line quite remarkably is known to be zero for any Riemannian four-manifold, and is called Berger's identity [24]. One can also show that the second line is equal to zero, which amounts to an algebraic identity that holds for any tensor sharing the algebraic symmetries of the Riemann tensor.
Finally, at the last order we find 14 (2) ) j j + (e (2) ) i i e j j g 2 ij γjε .
(3.66) 14 Of course, knowing h 1 ij we could write an expression for g 4 ij alone, but it is only the combination 4g 4 ij + h 1 ij which we shall need in the next section.
JHEP12(2017)039
Again there is a positive chirality condition on ε 7 which removes it from the above equation.
Using the many intermediate results we have derived, we then find
Metric independence
Our aim in this section is to show that, for any supersymmetric asymptotically locally hyperbolic solution to the Euclidean N = 4 + supergravity theory, with the topologically twisted boundary conditions on an arbitrary Riemannian four-manifold (M 4 , g), the variation (1.2) of the holographically renormalized action is identically zero. As explained in the introduction, this implies that the right hand side of (1.1) is independent of the choice of metric g, precisely as expected for the holographic dual of a topological QFT. We find that this is indeed the case, using the minimal holographic renormalization scheme described in section 2.3. We comment further on this at the end of section 4.2.
Variation of the action
As discussed in section 3.2, the Donaldson-Witten topological twist corresponds to the following boundary conditions on the supergravity fields on M 4 : Here the boundary Riemannian metric g ij on M 4 is arbitrary, with ω jk i being the spin connection, R being the Ricci scalar curvature, and the triplet of self-dual two-forms J I being given by (3.21). The holographic Ward identity for the variation of the renormalized action (2.44) with respect to general variations of the non-zero boundary fields is It is worth pausing to consider carefully why this equation holds. A variation of the boundary data on M 4 will induce a corresponding variation of the bulk solution that fills it. However, we are evaluating the action on a solution to the equations of motion, and by definition these are stationary points of the bulk action. Thus the resulting variation of the on-shell action is necessarily a boundary term, and this is the expression on the right hand side of (4.2). This argument requires that the equations of motion are solved everywhere in the interior of Y 5 : if the latter has internal boundaries, or singularities, the above in general breaks down, and one will encounter additional terms around these boundaries/singularities on the right hand side of (4.2).
JHEP12(2017)039
For the topological twist all boundary fields are determined by the metric g ij . Since X 1 = − 1 12 R, to compute δX 1 we need the variation of the Ricci scalar: with the variation of the Christoffel symbols being After integrating by parts twice we obtain where vol 4 ≡ √ det g d 4 x is the Riemannian volume form on (M 4 , g), and all geometric quantities appearing are computed using the boundary metric g ij . Substituting the value of Ξ from (2.52) leads to where the total derivative term is For δA I i we first need the variation of the spin connection. After a short calculation we have δω i jk = 1 2 e lj e mk (∇ m δg il − ∇ l δg im ) . Thus After integrating by parts, the SU(2) R current contribution is hence where we have substituted for the SU(2) R current using (2.53), and used the quaternionic Kähler identity (3.29). The object in square brackets is a tensor with indices ij: only the symmetric part contributes. The total derivative term is It remains to evaluate the stress-energy tensor contribution (2.51) and combine it with (4.6) and (4.10). Doing so leads to
JHEP12(2017)039
where the total derivative term is and (4.14) Here the first two lines come from the stress-energy tensor (2.51), while the last line combines (4.6) and (4.10). Provided M 4 is a closed manifold, without boundary, the integral of the total derivative term is zero, and we have simply The tensor T ij is thus an effective stress-energy tensor, for variations of the renormalized on-shell action with respect to the boundary metric, all boundary data being determined by this choice of metric. Our claim that the on-shell action is invariant under an arbitrary metric deformation δg ij is thus equivalent to the statement that T ij ≡ 0, for every Riemannian four-manifold. Remarkably, despite there being several undetermined quantities in (4.14), using the results of sections 2.3 and 3.4 we will show that indeed T ij ≡ 0 in the next subsection.
Proof that δS/δg ij = 0
We begin by substituting expressions from section 2.2 into (4.14), which recall follow from the Fefferman-Graham expansion of the bosonic equations of motion. In particular we substitute for ∇ 2 X 2 using equation (2.31), as well as various metric quantities, except for the combination 4g 4 ij + h 1 ij . With the topological twist boundary conditions (4.1) this leads to the expression In particular we have used the identity in deriving (4.16).
JHEP12(2017)039
The equations of motion, or equivalently supersymmetry conditions, determine On the other hand, in section 3.4 the expansion of the supersymmetry conditions led to the expression (3.67), which we repeat here: Substituting into (4.16), after several immediate cancellations we are left with Using the expression together with the contracted second Bianchi identity, we find that Here, remarkably, each of the three lines vanishes separately. The first line is zero using again (4.17) and the contracted second Bianchi identity, whilst the terms in the second line combine to give zero after using the self-duality property of the J I tensors to remove the Hodge dual acting on the field strength Da I 2 . The final line is zero after applying the Ricci identity for a rank two covariant tensor, followed by the first Bianchi identity and using the symmetry of the summed indices.
We emphasize again that this proof that δS/δg ij = 0 uses the minimal holographic renormalization scheme defined in section 2.3. Up to finite counterterms in (2.45) that are topological invariants, which have identically zero variations, another choice of scheme would spoil the above result. Another important comment is that the original path integral JHEP12(2017)039 arguments in [4] are essentially classical (see footnote 10 of [4]). In particular there might have been an anomaly, implying that the partition function (and other correlation functions) are not invariant under arbitrary metric deformations. In this case, the topological twist would not have led to a TQFT. This might seem like a strange comment, given that the topologically twisted N = 2 Yang-Mills theory of [4] at least formally reproduces Donaldson theory, which of course certainly does rigorously define diffeomorphism invariants of M 4 . However, it has recently been argued that precisely such an anomaly exists for four-dimensional rigid N = 1 supersymmetry [34,35]. The computations in these papers are in fact holographic, and rely on the fact that in AdS/CFT the semi-classical gravity computation is a fully quantum computation on the QFT side, including any potential anomalies. Specifically, it is argued that there is an anomalous transformation of the supercurrent under rigid supersymmetry on the conformal boundary, implying that the partition function is not invariant under certain metric deformations that are classically Q-exact. These particular anomalous transformations were first discovered in [31,33], via essentially the same computation we have followed in this paper, although this was not interpreted as an anomaly in [31,33]. It remains an open problem to directly derive this anomalous transformation from the QFT in a new minimal supergravity background. Returning to our present problem, the QFT is in any case coupled to an N = 2 conformal supergravity background, and for the N = 2 topological twist we find no anomaly. In particular our topologically twisted supergravity theory, formally at least, defines a topological theory. We discuss this further in section 5.3 and section 6.
Geometric reformulation
In this section we present a geometric reformulation of the bulk supersymmetry equations. In section 5.1 we describe how (twisted) differential forms built out of bilinears in the bulk spinor define a twisted Sp(1) structure on Y 5 , and in section 5.2 we then derive a set of first order differential constraints on this structure. On the conformal boundary this restricts to the quaternionic Kähler structure that exists on any oriented Riemannian four-manifold (M 4 , g), described in section 3.2. We also discuss some general aspects of the filling problem in section 5.3.
Twisted Sp(1) structure
Recall from section 2.1 that the bulk spinor ǫ of the Romans N = 4 + theory is originally a quadruplet of spinors. These split into two doublets ǫ ± , with eigenvalues ±i under Γ 45 (see equation (2.11)). Beginning in section 3.2, we worked in a truncated theory in which B ± = 0 and ǫ − = 0. We may then define where ζ is a spinor on Y 5 , and recall that ζ c ≡ C ζ * . Equation (5.1) is the solution to the symplectic Majorana condition (ǫ + ) c = ǫ + . More globally, and as on the conformal boundary M 4 , the spinor ǫ + in (5.1) is a Spin G spinor, where G = SU(2) R -see section 3.2.
JHEP12(2017)039
With this notation we may define the following (local) differential forms where in our Hermitian basis of Clifford matrices recall that a bar denotes Hermitian conjugate. There are a number of global comments to make. First, as in the discussion in section 3.2, the fact that ζ is globally a twisted spinor, rather than a spinor, means that (5.2) in general only locally defines an SU(2) ∼ = Sp(1) structure. 15 More globally, the J I are twisted via the SU(2) R symmetry, transforming as a triplet. We shall call this a twisted Sp(1) structure. Another comment is that in any case the structure is well-defined only where ζ = 0. In general there may be solutions to the spinor equations where ζ = 0 on some locus. We should hence more precisely define Y where χ is the boundary spinor defined in section 3.2. In particular for the topological twist this is constant, with constant square normχχ = c 2 (see equations (3.24), (3.25)). Without loss of generality we henceforth set c = 1, so that In particular notice that ζ = 0 near to the conformal boundary at z = 0.
Differential system
Starting from the bulk Killing spinor equations (2.7), (2.8) one can derive a system of differential equations for the twisted Sp(1) structure (5.2). In the notation (5.1) the spinor equations read 15 A general discussion of global Sp(1) structures on five-manifolds may be found in [41].
JHEP12(2017)039
As in section 2.1, it will be convenient to introduce the real one-form Using these equations, a standard calculation 16 leads to together with the triplet of equations Here the Hodge dual is constructed from the volume form vol 5 = −K ∧ vol 4 , where vol 4 ≡ 1 2 J I ∧J I (no sum over I). The sign here is chosen to match our earlier choice of orientation, via (2.17), as we shall see shortly.
We may read the first equation (5.7) as determining the one-form C in terms of geometric data and the function X: In particular, the associated flux is then Recall that in the original Lorentzian theory A is a U(1) R gauge field. In the real Euclidean section we have defined C = iA, which is a real one-form, but there is then a residual part of the (complexified) gauge symmetry C → C − dλ, where λ is a global real function. The fields transform as follows: with everything else invariant. In particular it is immediate to see that (5.9), (5.11) are invariant under these gauge transformations. In our boundary value problem recall that we fixed C | M 4 = 0, and in order to preserve this gauge condition on the conformal boundary one should restrict to gauge transformations that vanish there, so that λ | M 4 = 0. With this caveat, one might use this gauge freedom to effectively remove one of the functional degrees of freedom. 16 For example, see [19].
JHEP12(2017)039
Let us look at the asymptotic form of the differential conditions near the conformal boundary at z = 0. Recalling the Fefferman-Graham expansion of the fields (2.19)-(2.21), together with the topological twist boundary conditions (4.1), we have Here recall that R is the boundary Ricci scalar, the boundary gauge field is where ω jk i is the boundary spin connection, R mnij is the boundary Riemann tensor, and J I are the boundary triplet of self-dual two-forms. The one-form ia 2 is real. Using also (5.4), equation (5.7) then implies that Recall that in section 3.2 we defined the triplet of boundary almost complex structures (I I ) i j ≡ g ik (J I ) kj . If we define the boundary (almost) Ricci two-forms Here I I (η) i = (I I ) j i η j for a one-form η tangent to the boundary. It is interesting to note that the O(1) terms in J I above may also be written as 1 12 R J I − 1 2 ρ I = (g 2 • J I ), where recall from equation (2.34) that g 2 is (minus) the Schouten tensor of the conformal boundary. From (5.11) we hence read off the leading order the boundary equation Equation (5.18) follows from taking the skew symmetric part of (3.29). In fact since the exterior derivatives of the boundary SU(2) structure J I completely determine the intrinsic torsion (this is true for an SU(n) structure in real dimension 2n [42]), it follows that (5.18) also implies (3.29). We may always choose a frame E µ µ for the bulk metric on Y 5 such that
JHEP12(2017)039
In particular (5.15) identifies E 5 ∼ dz/z to leading order, and the sign for K in (5.19) follows since −γzχ = χ, where E z = dz/z. The volume form is vol 5 = E 12345 . Notice that the expansions (5.15), (5.17) imply that in general we may not identify E µ µ near the conformal boundary with the Fefferman-Graham frame E µ µ in (3.1), except to leading order.
Filling problem
As explained in the introduction, given a Riemannian-four manifold (M 4 , g) as a fixed conformal boundary, at least to a zeroth order approximation in AdS/CFT one wants to find the least action supersymmetric solution to the five-dimensional N = 4 + supergravity theory, with this boundary data. Such a solution will be the dominant saddle point on the right hand side of (1.1). In this subsection we make some comments on this problem, with further comments in section 6.1.
As we have seen in the previous subsection, supersymmetric solutions on Y 5 are characterized geometrically in terms of a set of first order differential equations (5.9), (5.11) for a certain twisted Sp(1) structure. In particular there is a triplet of twisted two-forms J I , I = 1, 2, 3, which locally at the conformal boundary restrict to an orthonormal set of self-dual two-forms on (M 4 , g). The differential equations become tautological on the boundary, and are equivalent to the fact that every oriented Riemannian four-manifold has a quaternionic Kähler structure, i.e. has holonomy group Sp(1) · Sp(1) ∼ = SO(4). This differential system on Y 5 , regarded as extending that on (M 4 , g), clearly deserves closer study. In particular, these are necessary conditions for a solution, but one would also like to know whether they are sufficient. It should also be possible to rewrite the renormalized supergravity action (2.44) in terms of this geometric data. The computation in section 4 implies that, given any one-parameter family of metrics on M 4 , the action of any family of fillings of the boundary is independent of the parameter. What type of invariant is this? A priori it depends on the choice of Y 5 filling M 4 , and on the twisted Sp(1) structure on Y 5 .
An important question is what are the global constraints on Y 5 ? As mentioned in the introduction, topologically a smooth filling Y 5 of M 4 exists if and only if the signature σ(M 4 ) = 0. Moreover, as explained in section 6.1, for solutions embedded in string theory one also needs these manifolds to be spin. 17 This restriction would seem to rule out many interesting four-manifolds. 18 However, as also mentioned in the introduction, requiring Y 5 to be smooth is almost certainly too strong. Already from AdS/CFT in other contexts, it is clear that the dominant saddle point contribution can be singular, and one might anticipate that this is somewhat generic, at least for general M 4 . Perhaps the appropriate question is then: what are the relevant singularities of Y 5 , for a given M 4 ? 19 Mathematically one would need control over existence and uniqueness of the differential equations for the twisted Sp(1) structure, for appropriate Y 5 (with singularities/appropriate internal boundary conditions) filling M 4 . However, one might also anticipate that the supergravity action (2.44) could 17 The relevant spin bordism group is Ω Spin 4 ∼ = Z, generated by a K3 surface, where the map to the integers is σ(M4)/16. 18 Although it leaves, for example, M4 = S 1 × M3, for any oriented three-manifold M3, and products of Riemann surfaces. 19 We thank S. Gukov for discussions on this, and indeed for posing this precise question!
JHEP12(2017)039
be evaluated without knowing the detailed form of the solution, but instead in terms of appropriate global data, and perhaps local data associated to singularities. Notice that one constraint on such singularities/internal boundaries is that they do not contribute to the variation of the action (4.2) -see the discussion after this equation. 20 Less ambitiously, one might also try to find explicit solutions; for example, via symmetry reduction so that the equations reduce to coupled ODEs. An obvious case is solutions with Y 5 = S 1 × B 4 , where B 4 is a four-ball so that ∂Y 5 = M 4 = S 1 × S 3 , and seek solutions invariant under U(1) × SU(2) (the latter acting on the left on S 3 ∼ = SU (2)).
Finally, the present problem may be contrasted to the general hyperbolic filling problem described in [43]. Here one also begins with an arbitrary Riemannian (M 4 , g), which is a conformal boundary, but one instead asks for the filling to be an Einstein metric of negative curvature. This problem is still quite poorly understood: there are in general obstructions and non-uniqueness, and one should at least impose that g has a conformal representative with positive scalar curvature [44] (physically, so that the CFT is stable). The geometric problem in the present paper is likely to be much better behaved: the equations are first order, not second order, and the solutions should be dual to a TQFT.
Discussion
We conclude with a discussion of "topological AdS/CFT" in section 6.1, followed by various extensions and generalizations in section 6.2.
Topological AdS/CFT
An application of the ideas developed in this paper would be to a topologically twisted version of the AdS/CFT correspondence. To make quantitative comparisons between calculations on the two sides, as in (1.1) (appropriately interpreted), the construction needs embedding in string theory. This is straightforward: the Romans theory is a consistent truncation of both Type IIB supergravity on S 5 [25], and also of eleven-dimensional supergravity on N 6 [26], where N 6 are the geometries classified by Lin-Lunin-Maldacena [45]. This means that any solution to the five-dimensional Romans theory uplifts (at least locally -see below) to a string/M-theory solution.
In order to be concrete, let us focus on the case of N = 4 Yang-Mills theory. Applying the Donaldson-Witten twist leads to the half-twisted theory referred to in the introduction. For general gauge group G the path integral localizes [46,47] onto solutions to a non-Abelian [48] version of the Seiberg-Witten equations, in which the spinor field is in the adjoint representation of G . For G = SU(N ), AdS/CFT should relate the large N limit of this theory to an appropriate class of solutions to the Romans N = 4 + theory in five dimensions, uplifted on S 5 to give full solutions of Type IIB string theory. This is where the restriction that M 4 is spin enters: if M 4 is not spin then the background SU(2) Rsymmetry gauge field we turn on is not globally a connection on an SU(2) bundle over JHEP12(2017)039 M 4 . On the other hand, the Type IIB solution is an S 5 fibration over the filling Y 5 , where S 5 ⊂ C 2 ⊕ C, and SU(2) acts on C 2 in the fundamental representation. Thus if M 4 is not spin, this associated bundle is not well-defined. This is also directly visible in the TQFT: for the half-twist of N = 4 Yang-Mills there are still spinors in the twisted theory, which only make sense if M 4 is spin.
There is some discussion of the half-twisted N = 4 theory for general gauge group G in [49]. In particular the (virtual) dimension of the the relevant non-Abelian monopole moduli space M may be computed using index theory, leading to Because of the associated fermion zero modes, the partition function of the theory vanishes unless the right hand side of (6.1) is also zero. We have already seen precisely this condition in the holographic dual set-up, namely equation (3.34). In the gravity context this followed from A being a global one-form, and then integrating the divergence of the VEV of the U(1) R current (the U(1) R anomaly) over a compact M 4 without boundary, as in (3.33). In fact the two are directly related, since the virtual dimension (6.1) of M computed in field theory is proportional to this integrated U(1) R anomaly. In the current holographic set-up, we can see this explicitly by first noting that for the large N limit of the G = SU(N ) halftwisted N = 4 Yang-Mills theory, a standard AdS/CFT formula fixes the dual effective five-dimensional Newton constant as This fixes the overall normalization of the supergravity action. In the large N limit, using (3.33) we may then write in terms of the integrated (holographic) U(1) R anomaly. Another important observation is that (6.1) is independent of the topology of the gauge bundle over M 4 , unlike the corresponding case for Donaldson theory (pure N = 2 Yang-Mills with gauge group G ). Because of this, all choices of gauge bundle contribute to the partition function at the same time. The left hand side of (1.1) then needs appropriately interpreting for such twists of four-dimensional N = 2 SCFTs, as taken at face value it may be divergent. There is a standard way to deal with this, 21 namely to refine the partition function via the U(1) R charge. For example, this is discussed at the end of section 2 of [50], and in [51]. This should play an important role in making sense also of the right hand side of (1.1), in addition to the comments on this in section 5.3. For example, a very concrete case mentioned in the latter subsection is M 4 = S 1 ×S 3 . Here the refined partition function is closely related to the Coulomb branch index, as explained in [52]. One might then try to reproduce this from a dual supergravity solution for which Y 5 = S 1 × B 4 , with 21 We are again grateful to S. Gukov for pointing this out.
JHEP12(2017)039
∂Y 5 = S 1 × S 3 . More generally, for a four-manifold S 1 × M 3 with product metric both E and P vanish, and the holographic U(1) R current is conserved, as can be seen from (3.32). The associated conserved holographic R-charge might then provide a natural holographic correspondent to the refinement of the partition function for the twisted four-dimensional SCFT. The AdS/CFT relation (1.1) in particular implies that the logarithm of the TQFT partition function, appropriately refined as above, scales as N 2 as N → ∞, when it is non-zero. On the other hand, when the right hand side of (6.1) is positive, one obtains non-zero invariants in the TQFT by inserting appropriate Q-exact operators into the path integral. We briefly discuss the dual holographic computation in section 6.2. In particular, such insertions will change the boundary conditions on supergravity fields we have imposed in this paper.
As far as we are aware, computations of topological observables in the half-twisted N = 4 theory, for general G = SU(N ), have not been done explicitly. However, for G = SU(2) the partition function and topological correlation functions have been computed explicitly for simply-connected spin four-manifolds of simple type [47]. This is done by giving masses, explicitly breaking N = 4 to N = 2, leading to an N = 2 gauge theory with a massive adjoint hypermultiplet, a twisted version of the N = 2 * theory. The twisted theory is still topological, and the relevant observables are written in terms of Seiberg-Witten invariants using the methods of [53]. Observables for the original theory are then identified with the massless limit of these formulae (when this makes sense), although the validity of this assertion is not completely clear. In any case, to compare to the holographic construction in this paper one should compute the large N limit for gauge group G = SU(N ). We note that an analogous large N limit of Donaldson invariants (for pure N = 2 SU(N ) Yang-Mills) has been computed in [9]. Unlike the formula (6.1), here the dimension of the moduli space of instantons depends on the topology of the gauge bundle. One can then choose this bundle in such a way that dim M = 0. The partition function is a certain signed count of the points that make up M, and the large N limit was computed for a certain class of four-manifolds in [9]. 22 We conclude this subsection by noting that similar remarks apply to twists of N = 2 SCFTs with M-theory duals. Indeed, an important restriction on the class of N = 2 gauge theories to which this holographic description applies is that they are conformal theories. 23 A large number of examples arise as class S theories [54], obtained by wrapping M5-branes over punctured Riemann surfaces, for which the gravity dual was found in [55] using the construction of [45]. Romans solutions uplift on the corresponding internal spaces N 6 to solutions of M-theory [26]. At the level of the five-dimensional theory, all that changes is the formula (6.2) for the effective Newton constant, which in general reads [56] 1 κ 2 5 = a π 2 , (6.4)
JHEP12(2017)039
where a is the a central charge. In the supergravity limit recall that a = c. For the abovementioned M5-brane theories the central charge scales with N 3 as N → ∞. Indeed, the partition function will a priori depend on both the choice of N = 2 SCFT that is being twisted, and also on the four-manifold M 4 on which it is defined. The choice of theory corresponds to the choice of internal space in the uplifting to ten or eleven dimensions. The structure of the dual supergravity solution as a fibration of the internal space over the spacetime filling of M 4 then implies that the large N limits of the partition functions should also factorize. That is, the dependence on the choice of theory should only be visible via the central charge a, which via (6.4) fixes the overall normalization of the supergravity action. On the other hand, the dependence on the choice of M 4 is then captured by the effective five-dimensional Romans theory we have described. 24
Generalizations
We have already discussed a number of open problems and directions for future work. Here we briefly mention some further generalizations: • Perhaps the most immediate generalization of the computations in this paper would be to the so-called Ω-background of [57]. Here (M 4 , g, ξ) is an arbitrary Riemannian four-manifold, equipped with a Killing vector field ξ. As for the pure topological twist, this geometry also arises by coupling an N = 2 gauge theory to a certain background of N = 2 conformal supergravity, and is briefly mentioned at the end of section 3 of [17]. The non-zero Killing vector ξ requires turning on a boundary B-field: specifically one needs to take b − (or b + ) proportional to the self-dual (or anti-self-dual) part of the two-form dξ ♭ , where ξ ♭ is the Killing one-form dual to ξ. Correspondingly, both boundary spinor doublets ε + and ε − are now non-zero, and one needs to work with the full Romans theory, rather than the truncated version with B ± = 0 we used from section 3.2 onwards. Nevertheless, the computations should not be too much more involved than those in the present paper. One expects the supergravity action now to depend on the choice of Killing vector ξ on M 4 , but otherwise not on the metric. One should thus look at metric deformations g ij → g ij + δg ij , where L ξ δg ij = 0.
• As mentioned in the introduction, there are three inequivalent topological twists of N = 4 Yang-Mills. The half-twist, relevant to this paper, was discussed in the previous subsection. The other two twists are the Vafa-Witten twist [11], and the twist studied by Kapustin-Witten in [12]. In particular in the former theory the only non-trivial observable is the partition function, and this has been studied for gauge group G = SU(N ) in [58]. These twists require the larger SU(4) R R-symmetry of the N = 4 theory, meaning for the holographic dual one needs to start with a
JHEP12(2017)039
Euclidean form of N = 8 gauged supergravity theory. Optimistically, one might hope to embed within the SU(4) ∼ SO(6) truncation of the latter theory studied in [59], which is a consistent truncation of Type IIB supergravity on S 5 , and contains the five-dimensional Romans N = 4 + theory (with zero B-field) as a further truncation.
• Topological twists exist in a variety of dimensions. In three dimensions the Rsymmetry group is Spin(N ). The analogous amount of supersymmetry to that studied in the present paper is N = 4, leading to a Spin(4) = SU(2) × SU(2) R-symmetry group. On the other hand Spin(3) = SU (2), and this leads to two inequivalent threedimensional N = 4 topological twists -see, for example, the diagram in section 1 of [60]. One of these twists is closely related (by dimensional reduction on a circle) to the Donaldson-Witten twist. The relevant holographic construction should begin with four-dimensional N = 4 gauged supergravity. This contains an Spin(4) R gauge field, as required, and is a consistent truncation of eleven-dimensional supergravity on S 7 [61]. The uplifted solutions should be holographically dual to twists of the ABJM theory [62] on N M2-branes, in the large N limit. This is currently under investigation [63].
• Finally, in this paper we have focused exclusively on the partition function. However, in general TQFTs have non-trivial topological correlation functions, involving the insertion of Q-invariant operators into the path integral. For example, this is true of Donaldson theory, where such insertions are required to obtain non-zero invariants in field theory whenever dim M = d > 0, due to fermion zero modes. Geometrically these invariants arise as the integral of a d-form over M, where this top form is itself constructed as a wedge product of certain closed forms. The operators are constructed via a descent procedure [4]. It would be very interesting to understand the holographic dual computation of these correlation functions. Of course, correlation functions are well studied in AdS/CFT. In the present setting one would again hope to be able to work in a truncated supergravity theory, containing the fields whose boundary values act as sources for the operators. Being topological, the correlation functions should be independent of the positions at which the local operators are inserted, and also independent of the metric. These statements might be proven along similar lines to the present paper. We leave this, and other interesting questions, for future work. | 19,374.6 | 2017-12-01T00:00:00.000 | [
"Mathematics"
] |
Optimizing SVM ’ s parameters based on backtracking search optimization algorithm for gear fault diagnosis
The accuracy of a support vector machine (SVM) classifier is decided by the selection of optimal parameters for the SVM. The Backtracking Search Optimization Algorithm (BSA) is often applied to resolve the global optimization problem and adapted to optimize SVM parameters. In this research, a SVM parameter optimization method based on BSA (BSA-SVM) is proposed, and the BSA-SVM is applied to diagnose gear faults. Firstly, a gear vibration signal can be decomposed into several intrinsic scale components (ISCs) by means of the Local Characteristics-Scale Decomposition (LCD). Secondly, the MPE can extract the fault feature vectors from the first few ISCs. Thirdly, the fault feature vectors are taken as the input vectors of the BSA-SVM classifier. The analysis results of BSA-SVM classifier show that this method has higher accuracy than GA (Genetic Algorithm) or PSO (Particles Swarm Algorithm) algorithms combined with SVM. In short, the BSA-SVM based on the MPE-LCD is suitable to diagnose the state of health gear.
Introduction
The gearbox is the most crucial transmission mechanism in a rotating machine, and the implementation of online monitoring and diagnosis has become quite urgent and necessary.The vibration signal of a gearbox contains much noise and unstable factors thus the vibration signal is a typical non-stationary signal.The fault signal characteristic is very weak and is usually masked by noise, especially when the fault is in its early stages; thus, it is very difficult for getting the fault features [1].There are three steps in diagnosing fault gears, namely: characteristic signal detection, feature extraction and fault classification.Generally, the process of extracting the features of faults is divided into selection and extraction.In both theoretical and experimental fields, the fact of selection and extraction of fault features is the common principle in diagnosing faults.Feature extraction is an important step in the fault identification.In case of incorrect or incomplete feature extraction, it unavoidably leads to the fact that the classification and false diagnosis are made with errors [2].So, the extraction of the effective information about the fault characteristics from complex dynamic mechanical signals is the key factor to solve the problem of large-scale complex mechanical and electrical equipment fault diagnosis.
The corresponding characteristics of the vibration signals will be presented, and getting the extraction of the fault information of a non-stationary vibration signal is the crux of gear fault diagnosis.Vibration signals are usually processed by decomposing the original signal, such as fast Fourier transform, wavelet transform, Hilbert-Huang transform and EMD method [3].This is one of the efficient techniques in diagnosing faults based on of time-frequency resolution, i.e.EMD is rather good at extracting the Intrinsic Mode Functions (IMFs), or mono-component functions embracing the original signal [4].But the EMD method has several shortcomings, such as the problem of mode mixing, distorted components and time-consuming decomposition [5].To this end, Wu and Huang have proposed a method namely EEMD [6].Pattern recognition is another side in diagnosing gear faults.
Recently, Cheng et al. have developed a new novel signal decomposition approach named as the local characteristic scale decomposition (LCD).Similar to the EMD, the LCD method is used to decompose a complex signal into several ISCs and a residue [7].Because the LCD is the same EMD method, it is a kind of data-driven and adaptive non-stationary signal decomposition, so it is suitable for processing non-stationary signals such as vibration signals of gears.
Support vector machine (SVM) proposed by Vapnik et al. is a promising classification method and has been successfully applied to many engineering fields, such as face recognition, credit scoring and mechanical fault diagnosis, etc. [8].SVM has the high generalization ability that depends on the adequate setting of parameters such as penalty coefficient and kernel parameters .Therefore, the selection optimal parameters are essential and important to obtain a good performance in handling learning task of SVM [9].In the past several years, the optimal SVM has gained great attentions.There are several methods for setting the parameters of SVM such as the grid search, the cross-validation method, and the gradient descent method [10].These methods have several drawbacks; for example, the performance of grid search is sensitive to the setting of the grid range and coarseness for each parameter, which is not easy to be set without prior knowledge, and the cross-validation method requires long and complicated calculations.
The Backtracking Search Algorithm (BSA), which has been developed by Civicioglu, is an evolutionary algorithm (EA) to solve problems of real-valued numerical optimization.Selection, mutation and crossover are three well-known operators which are based on the BSA method but others like the genetic algorithm (GA) and differential evolution (DE) [11].Furthermore, unlike many other metaheuristics, the BSA algorithm has only one control parameter and is not very sensitive to the initial value as reported therein.Since it had been introduced, the BSA attracted many researches, and it was applied to various optimization problems.Some successful examples are described below.In [12], a comparative analysis of BSA with other evolutionary algorithms for global continuous optimization was given.In [13], the BSA was used for the antenna array design.In [14], the BSA was employed for the design of robust Power System Stabilizers (PSSs) in multi machine power systems.In [15], it was used for the allocation of multi-type distributed generators along distribution networks [16].The problems of real-valued numerical optimization were quickly resolved by the BSA method, and the experiments demonstrated better results than EAs.Because of applying only one control parameter, thereby, researchers got more advantages in conducting experiments.For analyzed positive point, the paper will apply BSA in the combination with the SVM aiming at diagnosing faults gears, so-called BSA-SVM.
In this paper, the MPE-LCD based on BSA-SVM is used to diagnose and classify gear faults.Firstly, the original vibration signal is decomposed by the LCD method into some ISC components.Then, the first few ISCs are chosen and extracted the fault feature vectors by the MPE algorithm [17].Thirdly, in order to identify the work condition of gear, in this paper, the SVM is served as a classifier [18].Furthermore, the authors proposed a new optimal algorithm named as BSA to make the optimization for the SVM parameters (BSA-SVM).The characteristic vectors of the stationary ISCs are taken as the input data of the BSA-SVM, and then the gear faults and the normal gear condition can be differentiated [7].Furthermore, to ascertain the superiority of the MPE-LCD based on BSA-SVM, it is compared with EMD and EEMD combined with GA-SVM, PSO-SVM and SVM.The analysis results show that the diagnosis approach of BSA-SVM assisted with MPE-LCD is suitable for processing non-stationary signals, such as vibration signals of gears, with a higher accuracy and effect when compared to other methods.
The content of the paper composes as: Section 2 and 3 are dedicated to the LCD and MPE methods.In Section 4, the parameter optimization algorithm based on the BSA method is addressed.Section 5 refers to characteristic energy extracted from a number of ISCs which played as input vectors of BSA-SVM.The most important content will be shown in Section 6.This not only introduces the optimization algorithm linked to BSOA-SVM and LCD but indicates its strong points in practice.Eventually, Section 7 will make some conclusions.
Local characteristics-scale decomposition method
Similar to EMD, the LCD is a self-adaptive data decomposition method.Based on the local characteristic scale, the LCD establishes a new definition of a mono-component with physical meaning [19].Any complex signal () can be decomposed into several ISCs and the residue by using the LCD [18], as follows: where () is the th ISC and () is the residue.The assessment criterion for ISC is as follows: all the maxima are positively, and all the minima are negatively in the whole data set.As shown in Fig. 1, any two adjacent maxima (minima), ( , ) and ( , ) are connected by a straight line.To the intermediate minima (maxima) ( , ), its corresponding point ( , ) on this straight line is as follows: For assuring the smooth and symmetric features of the ISC, the proportions of and remain constant, as follows: where is a proportional coefficient.When is a constant, the proportions of and can remain constant.Generally, is set as 0.5, thus, / = -1.At this time, and are symmetrical for the -axis ensuring the symmetric feature of the achieved ISC.Without losing the generality, the signal is shown as: where: In Fig. 1 the waveforms of (), () and () are shown.As a constant sequence or monotonic sequence, () represents the average trend of the signal generally.The ISC component ( = 1, … , ) represents a signal of different frequencies from high to low.The signal frequency component in each frequency band changes according to the original signal.The decomposition process of the LCD is as follows (the whole decomposition process ends when the SD criterion [17] is satisfied, and a value of SD = 0:3 was set in this study): (1) Assume the number of extrema of the signal () is , and determine all the extrema ( , ) ( = 1, … , ) of the signal ().
If ℎ () is an ISC, take it as the first ISC of ().Otherwise, take it as the original signal and repeat the above process until ℎ () is an ISC after iterations of the computation.Afterwards, ℎ () is expressed as the .
(5) Separate the first from (): (6) Take the residue as the original signal to be processed.Repeat the above process until , … , and the residues are obtained, as shown in Eq. ( 1).
The original signal is disintegrated into several ISCs by applying LCD, and the first few ISCs have the highest frequency and largest energy, while the last few ISCs remain relatively moderate.However, high frequency modulations still exist, which can interfere with the feature extraction.The TEO is used to demodulate the obtained ISC signal in this paper.
Permutation entropy
Bandt and Pompe suggested the Permutation entropy (PE) in order to get the detection of the dynamic change in the time series by making a comparison between the neighboring values.The concept and steps to calculate PE values are described as follows [14]: A time series given with the length (), = 1,2, … , , then the dimensional vector at time can be constructed as: = (), ( + ), . . ., [ + ( − 1)] , = 1,2, . . ., − ( − 1), where is a new time series, represents the embedding dimension, and is time delay.As described in [17], the has a permutation , ,..., ; if it satisfies that: where 0 ≤ ≤ − 1 and ≠ .Therefore, for an -tuple vector, there are ! possible distributions.Furthermore, we define the relative frequency for each distribution as: where represents the number of which is consistent with the type .The definition of PE with m dimension can be written as: Note that attains the maximum value (!) when = 1/!Then the normalized permutation entropy by ln(!) could be formulated as: It can be known that () satisfies that 0 ≤ () ≤ 1).
It would be concluded that a bigger value shows the time series is more random and irregular.In case of a smaller value, it can be deduced that the time series is more regular and periodic.In extreme cases, the time series is a white noise, the value is one, also in the case of predicted signal (sine or cosine), the value is zero.Therefore, PE may be utilized to make the estimation for the complexity as well as a dynamic change for a given signal.
Multi-scale permutation entropy
Costa [23] developed the multi-scale analysis algorithm to estimate the complexity of the original time series based on alternative scales.Aziz and Aric proposed the multi-scale permutation entropy (MPE) on the basis of multi-scale analysis definition.The MPE algorithm embraces two steps: firstly, getting multiple scale time series from the original time ones used the coarse-grained procedure, secondly, the determination of permutation entropy value for every coarse-grained time series [19].This process is summed up as: (1) A time series is given as (), = 1,2, … , .Splitting into disjointed windows is different in terms of length .The coarse-grained time series at a scale factor in which means a positive integer that may be constructed in accordance in Eq. ( 12): (2) In the MPE analysis, the PE of each coarse-grained time series is calculated based on Eqs. ( 7)- (11) and then plotted as the function of the scale factors, which can be expressed as: (, , , ) = , , .
We calculated the PE of each coarse-grained time series as a function of scale factor and then plotted it as a function of the scale factor, and call this procedure as the multi-scale permutation entropy.In order to select the best for MPE calculation, we take the Gaussian white noise signal with length = 2048 as an example.The MPEs are calculated under embedding dimension = 4, 5, 6, and 7 when the parameters maximal scale factor = 12 and = 1 [7].
Backtracking search optimization algorithm
The main content of BSA method is to generate a trial population including two new crossover and mutation operators.The BSA strategy is considered as powerful exploration because this method both generates trial populations and controls the amplitude of the search-direction matrix and search-space boundaries [21].Particularly, the BSA stores a population from a randomly chosen previous generation to use it in generating the search-direction matrix.The BSA strategy is very strong in terms of both a global exploration and local exploitation with a good feature of local minima avoidance [22,23].The main five procedures of the BSA method are presented as follows in Fig. 5.
Problem and Algorithm Parameter: An overview of five processes is provided as follows.
In which: NP is the size of population (Pop Size).DP is the dimensional number of the problem.random is a real value which is uniformly allocated between 0 and 1. stands for the lower bound for the th factor of the th individual. is the upper bound for the th factor of the th individual.
Selection-I
The generation of is the historical population, in the procedure of the Selection-I, and it is used to make a calculation for the search direction.The formulation is as follows: In each iteration, is defined as follows: In which: ≔ means the updated operation.Two random numbers, and , are within the range [0, 1].The above equation assures that the BSA algorithm population might be randomly chosen from a historical population.The algorithm memorizes the historical population till it is changed via a random permutation.
Mutation
Initially, the trial population is created via mutation operation as per the bellow formulation: where is a scale factor which controls the amplitude of mutation ( − ).
In this paper, = 3 • , where is a random real number with uniform distribution within the range (0, 1).By involving the historical population in calculating − , BSA learns from its memory of previous generations to obtain a trial population [24].
Crossover
The final trial population is generated by crossover.The guidance of the trial individuals characterized by improved fitness values supports the search direction in optimizing the problem.BSA crossover works as the following procedure.A binary integer-valued matrix (map) of size × is computed in the first step.
The individuals of are generated by utilizing relevant individuals of .If , = 1, is updated with , : = , , .
Selection-II
In the stage of Selection-II, makes the corresponding better in the manner of fitness value which is utilized for updating [21].As the BSA finds the best optimal value ( ) which is dominant comparing to all previous ones, takes the place of all optimal solutions, and these values are updated as the most fitting value of .
Support vector machine (SVM)
The SVM is a type of machine learning technique.The SVM relies on the theory of statistical learning.The SVM is handling the training samples as the input to a higher-dimensional characteristic space through the use of a mapping function .Assuming that there is a given set of the training samples = ( , ), = 1,2, . . ., in which each sample ∈ belongs to a class by ∈ +1, −1 , and the training data is not linearly separable in the space of feature, then the target function can be expressed as follows [17]: In which is the normal vector of the hyperplane, is the penalty parameter, is the bias, is non-negative slack variables, and () is the mapping function.
By introducing a set of Lagrange multipliers ≥ 0, the optimization problem could be rewritten as: The function of making decision can be achieved as: The SVM method used the radial basis kernel function which is the most common kernel function as it indicates in the bellow equation: where is the kernel parameter.
SVM's parameter optimization relied on BSA
The performance of SVM is significantly impacted by its parameters.It needs to be selected as a penalty factor and kernel parameter in the function of Gaussian kernel.It is not easy to select these parameters.As a whole, and are chosen empirically.Therefore, this paper applied the BSA as a technique for optimizing the parameters of SVM.As a consequence, and played as the variables.SVM error testing became the fitness function for the sake of optimization.
The SVM error testing is made as: where = (; ) and the test error of SVM was defined as: where and mean the numbers of true and false classified samples, respectively.The desirable value is too small to ensure high classification accuracy.
Experimental results
The paper used Thyroid, Seed and Escherichia coli (E.Coli) as three common sets of benchmark data provided by the University of California, Irvine to quantify the performance of the suggested BSA-SVM method.
Table 1 showed the training and test sets.Every sampling set is separated into two subsets.In which, one subset is used for SVM training, and the other is for testing the obtained model.The training set accounted for 70 %, and the test set took 30 % of the total samples.Based on the trial and error detection practice, this proportion was selected in order to evaluate the performance of the achieved SVM which was optimal for the available samples.
Three data sets were used to make classification among the BSA-SVM, GA-SVM, PSO-SVM, and CMAES-SVM methods.The values selection is the same in all four above methods in order to make comparison fairly (e.g., iteration = 30 and Pop Size = 30).For the PSO, the parameters were fixed with the values given in the literature [22,26] (i.e., = 0:9, = 0:5, and = 1:25).For CMAES, the parameters were fixed with the values given in the literature [24] (i.e., = 0:25 and = 4 + 3log()).Each testing method result was considered as the average value of 30 runs.
The training data and test data were both mixed and were randomly divided, as shown in Table 1.
Lin et al. argued that [25] the lower and upper bounds of were given in [0:01; 35000] and in [0:01; 32] for the BSA-SVM, GA-SVM, PSO-SVM, and CMAES-SVM classifiers.Each search method gave the values of and in order to give the smallest value of the classification error.Table 2 presented experimental results.
The effectiveness of the proposed method was illustrated in Tables from 2 to 4 on the basis of the detailed classification results of each data set.The Thyroid and Seed data sets encompassed three classes, thus, the authors needed two SVM classifiers.The E. Coli data set embraced five classes; therefore, the authors needed four classifiers.These tables show the optimal parameters ( and ), average test error, and average cost time done by different algorithms.The results of the E. Coli, Seed and Thyroid classification were given in Table 1.The next Table indicates that the cost timing and testing error of BSA-SVM was a bit lower in comparison with those of other methods as GA-SVM, PSO-SVM, and CMAES-SVM.Civicioglu said that the BSA utilized a mutation mechanism in which there was a complex crossover and one individual [25].Moreover, by using its memory, the BSA took an advantage of the experiences achieved from previous generations.Tables [2][3][4] demonstrate that the BSA-SVM classifier achieved higher classification accuracy in the way of a shorter time as comparing to other methods.Next, the BSA-SVM method was used to diagnose a gear fault.
Gear fault diagnosis method based on BSA-SVM and LCD-MPE
Sections 2 and 3 showed the applied LCD in the combination with the MPE under different work status to analyze a gear vibration signal.Thereby, fault patterns are clearly recognized via the energy of each ISC changed when the gear malfunctioned.The BSA-SVM input vector is the energy feature of every ISC component.By doing so, the work status and fault patterns of the gear can be identified.(1) Choose several signals as samples under three cases: gear normal, gear chipped, and gear broken.
(2) Disintegrate the original vibration signals by the LCD method into several ISCs.The first mISCs which include the most dominantly information of the fault gear will be chosen to extract the feature.
(3) Calculate MPE for each gear vibration signal getting from the first ISCs with the parameter selection: = 6; = 1; = 2048, and the maximum scale factor = 12.(4) Then the MPEs are obtained in all scales, and are viewed as the feature vector to represent the main fault information of gear vibration signal.
(5) Run the BSA-SVM, with the parameters of SVM, named as and , which are optimized by the BSA.
Application of BSA-SVM and LCD-MPE to diagnose gear faults
Data collected in this paper comes from the public data sets distributed by the Prognostics and Health Management (PHM) Society [29], and the data sets were sampled synchronously from two accelerometers mounted on each side of the gearbox housing shown in Fig. 7. Data collection was implemented at five different shaft speeds of 30, 35, 40, 45 and 50 Hz under a low and high load from the brake with the sample frequency = 200/3 kHz.The low and high loads have resulted in a small difference of shaft speed, each load case is repeated four times.There are totally 560 different cases to be diagnosed, with six cases using helical gears and the other eight spur gears.On the input shaft, the tachometer generates 10 pulses per revolution and data from the tachometer is very accurate.Here we chose the helical gear to research with three states: normal, chipped tooth and broken tooth, and the sampling frequency can be taken as 1024 Hz.The vibration signals of helical gear with three conditions (normal, chipped tooth and broken tooth) were tested, and 40 vibration signals from the helical gear in each condition were taken from 6 groups collected at random as the test data.5.
The input vector of the BSA-SVM method uses the LCD or EMD analysis as preprocessor to extract the energy in each frequency band in identifying the fault gears.These input vectors affect the identification of the fault gears.The frequency components after decomposition by LCD-MPE has better results than the ones in the EMD and EEMD method, therefore, the method BSA-SVM based on LCD is better than that based on EMD.The experimental results were demonstrated in Table 5 and Fig. 8.As a result, the accuracy of identification is good, and the time is shorter.In practical cases, they may be used for a fault diagnosis.
Conclusions
Basing on the feature of non-stationary of gear fault signals, a method of diagnosing a faulty gear based on the LCD-MPE and BSA-SVM is proposed.LCD and MPE were firstly used to pre-process different types of vibration signals.BSA was then employed to make the optimization for the parameters of SVM aiming at increasing the sensitivity and accuracy of SVM.This combination used to classify data of the work condition of the gear is called as BSA-SVM.When the working condition of gear changes, then the fault feature extraction is also changed indicating that the energy of each frequency component is changed when the gear with different faults is operating.Thus, the feature vector of each ISC component is adopted as input features for BSA-SVM to identify the gear work condition.Based on experimental results, some conclusions of the paper are: (1) LCD is a method of processing self-adaptive signals, which can be applied to nonlinear and non-stationary processes faultlessly.
(2) BSA is an optimal algorithm that is used to make the optimization for the parameters of SVM network.Experimental results above show that the BSA and SVM combine better than other optimization algorithms (GA, PSO) combined with SVM.
(3) The successful combination of LCD-MPE with BSA-SVM identifies the work condition and fault patterns for the gears and provides an effective tool for intelligent fault diagnosis of gears.
(4) The BSA-SVM method that took the fault feature vector of each frequency component based on LCD combined with MPE as the input features has greater identification power than that based on EMD or EEMD combined with MPE.
In short, the paper presents another approach to gear fault diagnosis using a chain of data mining methods.Firstly, the LCD is used for vibration signals decomposition.The LCD is an improved form of the Huang-Hilbert Transform.Thanks to this, the problem of the mode mixing in this transform is removed.It is well visible in diagnosis results presented in Table 5. Next, the MPE is calculated as a feature characterizing each ISC obtained after signal decomposition by the LCD method.Then the MPEs values obtained in all scales are used as the feature vector to SVM classifier.However, the paper authors have improved the effectiveness of this classifier by optimizing its parameters with the aid of backtracking search optimization algorithm (BSA).Thanks to such a novel approach to the whole procedure of data mining applied to gear fault diagnosis they propose a very effective methodology which is able to perform the bearing different fault type diagnosis in a very reliable and robust way.According to the results presented in Table 5, the combination of LCD-MPE with BSOA-SVM can identify the work condition and fault patterns of the gears with the test error equal to 0 % for all the tested cases.
Fig. 1 .Fig. 2 .Fig. 3 .
Waveform of signal: a) (), b) (), c) () Result of signal (10) decomposed by EMD aResult of signal decomposed by LCD (3) All the ( = 1, … , ) are connected by a spline line (), which is defined as the base line of the LCD.By contrast, EMD's base line is defined as the mean of the upper and the lower envelopes.(4) The difference between the data and the base line () is the first component ℎ (): ℎ () = () − ().
Fig. 6 .
Fig. 6.Flowchart chart of gear fault diagnosis based on LCD-MPE and BSA-SVM Fig. 6 shows the flow chart of the LCD-MPE and BSA-SVM method for the gear fault diagnosis.The description of the method for diagnosing gear faults is as follows:(1) Choose several signals as samples under three cases: gear normal, gear chipped, and gear broken.(2)Disintegrate the original vibration signals by the LCD method into several ISCs.The first mISCs which include the most dominantly information of the fault gear will be chosen to extract the feature.(3)Calculate MPE for each gear vibration signal getting from the first ISCs with the
Fig. 7 .
Fig. 7. Common test rig Firstly, the vibration signals of each group with three conditions named as normal, chipped tooth and broken tooth were decomposed by LCD-MPE into a number of ISCs.The first seven ISCs embrace the most dominant information which is selected and arranged in accordance with the frequency components as (), (), … , () from high to low.Then, the fault features vector was obtained following the MPE method.Finally, the BSA-SVM was used to identify the various patterns.In order to have a fair comparison, the original vibration signals are chosen similarly.These signals are decomposed by LCD into ISCs, of which, the first eight IMFs are selected and arranged in the manner of high to low values basing on frequency components as (), (), … , ().And then, the feature vector is taken into the input of BSA-SVM.The identification results for the test samples based on LCD-MPE and compared with EMD and
Table 1 .
Parameters of datasets
Table 2 .
Results of thyroid data set Method Training samples Test samples Average cost time (s) Average test errors (%) Refs.
Table 4 .
Results of E. Coli data set MethodTraining samples Test samples Average cost time (s) Average test errors (%) Refs. | 6,622 | 2019-02-15T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Designing optimal protocols in Bayesian quantum parameter estimation with higher-order operations
Using quantum systems as sensors or probes has been shown to greatly improve the precision of parameter estimation by exploiting unique quantum features such as entanglement. A major task in quantum sensing is to design the optimal protocol, i.e., the most precise one. It has been solved for some specific instances of the problem, but in general even numerical methods are not known. Here, we focus on the single-shot Bayesian setting, where the goal is to find the optimal initial state of the probe (which can be entangled with an auxiliary system), the optimal measurement, and the optimal estimator function. We leverage the formalism of higher-order operations to develop a method based on semidefinite programming that finds a protocol that is close to the optimal one with arbitrary precision. Crucially, our method is not restricted to any specific quantum evolution, cost function or prior distribution, and thus can be applied to any estimation problem. Moreover, it can be applied to both single or multiparameter estimation tasks. We demonstrate our method with three examples, consisting of unitary phase estimation, thermometry in a bosonic bath, and multiparameter estimation of an SU(2) transformation. Exploiting our methods, we extend several results from the literature. For example, in the thermometry case, we find the optimal protocol at any finite time and quantify the usefulness of entanglement.
I. INTRODUCTION
Quantum parameter estimation, also known as quantum metrology or quantum sensing, is at the heart of quantum technologies [1].The quantitative assessment of some properties of a system, such as magnetic field amplitude, length, temperature or chemical potential, to name a few, is a key task for science and industry.A sensor is a device which manipulates probes interacting with the system of interest in order to readout its properties.Loosely speaking, the sensing becomes quantum, whenever the manipulation of the probes and their interaction with the measured system is governed by quantum physics.Quantum metrology has been very successful in advancing technological frontiers as showcased in several experiments, namely, the detection of gravitational waves [2,3], thermometry [4,5], magnetometry [6,7], and phase estimation in optical platforms [8].
The theory of quantum metrology aims at developing protocols that use optimally the probes and other metrological resources-such as quantum correlations, coherence and measurement time-in order to estimate the parameter with minimal error [9][10][11][12][13], and uncovers ultimate limits on the achievable estimation precision [14][15][16][17][18][19].These limits are usually expressed as bound on the Fisher information (matrix) that must hold in a certain context and are related to the mean squared error (MSE) via a Cramér-Rao type bound [20][21][22][23][24][25].In the single-parameter case, such bounds are often saturable in the regime where the protocol is repeated many times [26,27].*<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>in the limit of small data, such bounds are not generally saturable, and furthermore the MSE, addressed by the Carmér-Rao bound, may not be the best quantifier of the estimation precision.Such problems can be attacked from the perspective of the full Bayesian framework.In the Bayesian approach, one starts with a prior distribution (belief) of the parameter and updates it through the protocol based on the observed measurements results.Crucially, the choices of prior distribution and the cost (or reward) function have a substantial impact on the optimal protocol.Finding such optimal Bayesian protocols, is one of the key problems in metrology.This is a non-trivial task even in the case of single-shot scenarios, where the protocol is described by the combination of the initial state, the final measurement, and the estimator function.Optimal protocols are only known for a few highly-symmetric specific cases (see Ref. [28,29] for a review), and for specific cost functions in the singleparameter regime [30,31], while general effective numer-ical methods for finding them are lacking.
We therefore dedicate this work to address the shortcomings of quantum metrology within the single-shot Bayesian framework.Namely, we exploit the formalism of higher-order operations [32][33][34][35] to combine two pivotal aspects of the estimation protocol-the quantum state and the measurement, referred to as the quantum strategyinto a single and equivalent higher-order transformation, called quantum tester [32,36,37].While the standard approach to metrology typically involves the optimization over state and measurement individually [38][39][40], often in a non-efficient, heuristic manner, quantum testers allow us to optimize over the quantum strategy altogether, finding the optimal state and measurement efficiently with a single instance of a semidefinite program (SDP).Originally a tool applied to tasks such as channel discrimination [36,41,42], the higher-order operations formalism was recently extended to quantum parameter estimation problem, both in the frequentist setting in order to maximize the Fisher information of a protocol [43,44] and in the Bayesian setting in order to maximize the probability of a fixed-width credible interval [40].In this work we focus on single-shot Bayesian setting and show how to leverage the properties of higher-order operations in order to efficiently optimize the estimation protocol with respect to any reward function.
We propose three different methods to integrate the optimization of the quantum strategy, i.e., state and measurement, with the optimization of the estimatorstherefore finding the optimal overall protocol within arbitrary precision.Our methods take into account both numerical and practical limitations, finding application in a wide range of realistic scenarios.It is furthermore appropriate to any estimation problem regardless of prior distribution, reward or cost function, or the type of quantum evolution.Moreover, we show how these methods can be straightforwardly adapted to multiparameter estimation problems.
To demonstrate the merit of our approach, we present three case studies where we apply our methods to relevant parameter estimation problems: phase estimation, thermometry, and SU(2) estimation.These examples cover single and multiparameter problems, both unitary and non-unitary evolution, reward or cost functions of varying nature (e.g.fidelity and MSE), and different prior distributions (e.g.uniform and Gaussian).Moreover, we use one of our case studies, thermometry, to show how our approach can be adapted to approximate quantum strategies that do not permit for entanglement between the probe and an auxiliary system for their implementation.This allows us to demonstrate that entanglement provides an advantage over no-entanglement strategies in a finite-time temperature estimation task.Our techniques can be similarly used to answer whether entanglement can be useful in other estimation tasks, and put a lower bound on the usefulness of entanglement.In the thermometry problem, we also find the optimal protocol in finite time, which was previously only known in the frequentest regime [45], and show that the estimation precision only decreases with t → ∞.
All the code developed for this work is made available in our open online repository [46].
A. Bayesian parameter estimation
In a standard metrology problem, one is interested in estimating an unknown parameter θ by encoding it into the quantum state of a probe.The encoding process can be described by a quantum channel-a completely positive and trace preserving map-which we denote by E θ : L(H I ) → L(H O ) where H I and H O are the Hilbert spaces of the input and output systems of the channel, respectively.When probing the channel, it is in general more advantageous to also use an auxiliary system which is initially entangled to the probe-but does not go through the channel, as sketched in Fig. 1.In other words, one considers the extended channel E θ ⊗ id, where "id" is the identity channel acting on the auxiliary system.The chosen global input state, given by the density operator ρ ∈ L(H I ⊗ H aux ), is then mapped to a global output state ρ θ := (E θ ⊗ id)[ρ] by the extended channel.In order to extract the information about the parameter θ encoded in this state, one performs a joint measurement ) in the auxiliary system and the output state of the channel.Finally, in the considered setting, one designs an estimator θ that assigns an estimate θi to the true value of the parameter θ, conditioned on each measurement outcome i.The quality of the estimation can be then quantified by setting some score (cost) function, evaluating the closeness (deviation) of the estimator to the true parameter value.Indeed, the score should depend on the protocol ; i.e., the triplet of the initial state, the measurement, and the estimator {ρ, {M i } i , { θi } i }.A central problem in quantum metrology is finding the optimal protocol.
In the Bayesian approach, one starts with a prior belief in the parameter value given by a probability distribution p(θ).After the measurement, described by the Born rule one uses the Bayes' rule to update the distribution of the parameter based on the observed outcome i where the normalization factor is defined as p(i) := dθp(i|θ)p(θ).
The performance of the estimation strategy can be quantified according to a score.Generally, this can be cast as where r(θ, θi ) is a reward or cost function that quantifies the difference between the parameter θ and each estimate θi .A particular choice of cost function is the MSE r MSE (θ, θi ) = (θ − θi ) 2 .In light of this definition, it becomes clear that the optimal protocol will be the one that either maximizes or minimizes the score S, depending on whether r(θ, θi ) is a reward or a cost function, respectively.
As previously mentioned, this problem does not have a known analytical solution in general.and efficient numerical methods have only been proposed for a few special problems [39,47].In this work we provide an efficient algorithm that approximates the solution with arbitrary precision, and works for all cost functions and number of parameters.
B. Quantum testers: the quantum strategy as a higher-order operation A typically cumbersome part of metrology and estimation problems is the optimization of the quantum strategy, i.e., of the state and measurement that is used to probe the channel that encodes the parameter to be estimated.Here, we apply techniques from the formalism of higherorder operations [32][33][34][35] to fully characterize the set of quantum strategies applicable to a given estimation task.We then use this reformulation to efficiently optimize over quantum strategies using semidefinite programming [48][49][50].In particular, we exploit the connection between the states and measurements and an object of the higher-order formalism called a quantum tester.
While quantum maps describes transformations of quantum states, higher-order operations (also called supermaps) describe transformations of quantum maps themselves.The equivalent of a POVM in this formalism is a quantum tester -the most general higher-order transformation that maps quantum channels to a probability distribution, effectively "measuring" a quantum channel and yielding a classical outcome with some probability.As illustrated in Fig. 1, a tester T is equivalent to the concatenation of a state ρ and a measurement M .Nevertheless, as we now explain, the Born rule in Eq. (1) becomes linear in the tester variable, which is characterized by simple SDP constraints.We then exploit these two properties to efficiently optimize over the quantum strategies.
In order to express an estimation problem in terms of testers, we start by restating the problem using the Choi-Jamiołkowski isomorphism [51,52].In this repres-entation, a map E θ : L(H I ) → L(H O ) can be equivalently expressed as an operator Using the Choi operator, the output state of the probe can be expressed as (5) where (•) T I denotes the partial transposition over the input space H I .Then, the probability of obtaining outcome i, as in Eq. ( 1), can be equivalently written as We can now group the objects that constitute the quantum strategy, that is the state and the measurement, into a single object called the quantum tester [32,36,37].
) is a set of N O (standing for the "number of outcomes") operators defined as which allows one to rewrite the probability of obtaining outcome i, in Eq. ( 7), as simply The usefulness of this representation comes from the fact that, as shown in Ref. [32,36,37], testers have a simple mathematical characterization.More specifically, they obey the following set of necessary and sufficient conditions: i where σ ∈ L(H I ), σ ≥ 0 and tr(σ) = 1.It is straightforward to see that every set of operators T that satisfy Eq. ( 8) also satisfy Eqs.(10) and (11).The converse is also true.Given any set of operators T that satisfy Eqs.(10) and (11), one can define a state ρ and measurement such that The state ρ and measurement M are called a quantum realization of the tester T .This realization is not unique, as different sets of states and measurements can lead to the same tester.However, crucially, different states and measurements that lead to the same tester will also yield the same probability distribution {p(i|θ)} i in Eq. ( 1), and have the same performance in an estimation task.Hence, the optimization of any linear function of p(i|θ) in Eq. ( 9) over a tester T = {T i } that satisfies Eqs.(10) and ( 11) is a semidefinite program, and its optimal tester is guaranteed to have a quantum realization in terms of a quantum state and measurement.Importantly, once the optimal quantum strategy (i.e.tester) is found, the corresponding optimal state and measurement can be easily determined using Eqs.( 12) and (13).
Notice that, while a tester is a set of operators that act only on the input and output space of the channel C θ , its quantum realization may require an auxiliary system.This implies that the optimal quantum strategy may require entanglement between the target and auxiliary systems, and a global measurement that acts on both of these systems.The dimension of the auxiliary space is bounded to be at most the dimension of H I , as established by the explicit construction of ρ in Eq. ( 12).The auxiliary system can also be interpreted as a (quantum) memory.Hence, by optimizing over testers, one is effectively optimizing over all possible quantum strategies, including those that may require memory/entanglement for their implementation.
However, certain experimental limitations might induce a situation in which it is necessary to design a quantum strategy that does not require entanglement for its implementation, or a means to certify whether entanglement is indeed advantageous in a given estimation task.In App.A we provide details on how quantum strategies that do not require entanglement can be approximated with SDPs.Moreover, in Sec.V B we provide an example of a temperature estimation problem in which our methods demonstrate a clear gap between the performance of strategies operating with and without entanglement.
III. OPTIMAL TESTER FOR METROLOGY VIA SEMIDEFINITE PROGRAMMING
Using quantum testers, we can now rewrite the score of an estimation problem in Eq. ( 4) as Now, to find the optimal score of a given estimation task, the optimization of S over the triplet {ρ, {M i } i , { θi } i } can be substituted for an optimization over the pair We may express all dependencies of the score S on the estimates { θi } N O i=1 with a set of operators {X( θi )} N O i=1 , X( θi ) ∈ L(H I ⊗ H O ), which are given by an integral over the parameter θ, defined as These operators encompass all the given information about the task (prior distribution, cost function, and channels in which the parameter is encoded) that does not depend on the quantum strategy.Expressed in terms of these operators, the score is simply For any given set of fixed estimates { θi }, the optimization of the score is given by either a maximization or minimization (depending on the character of the cost function) of S over all testers T .Taking maximization for instance, we have that is the optimal score.The optimization over testers includes the constraints of Eqs.(10) and (11).Since testers T = {T i } are sets of positive semidefinite operators characterized by linear constraints, the above optimal score can be efficiently computed using SDP.Once again, the optimal tester is guaranteed to have a quantum realization, hence for any optimal solution of T that the SDP should return, there exist a probe state ρ and measurement M that can realize it; they constitute the optimal quantum strategy for the given estimators.
Notice that this can be straightforwardly generalized to the multiparameter regime as well.In App.C we provide more details on this case, while in Sec.V C we present an example of the application of our methods to the multiparameter problem of SU(2) estimation.
It is now clear that given the knowledge of the estimator values θi and the operators X( θi ) one can find the optimal tester T efficiently.The remaining difficulties thus are: (1) Finding the optimal estimators { θ * i } leading to the optimal score (2) Computing the integral in Eq. ( 16).
In the following, we construct three different approaches to tackle both of these problems.
IV. PARAMETER DISCRETIZATION AND ESTIMATOR OPTIMIZATION
In situations where the optimal estimators are unknown, or the integral in Eq. ( 16) cannot be calculated exactly, an approximation of the optimal score in Eq. ( 18) can still be computed with SDP.This can be achieved by first discretizing the parameter θ to a finite number of hypotheses, thereby mapping the original parameter estimation task onto one closely resembling channel discrimination.
Concretely, let us choose a discretization of θ such that θ → {θ k } N H k=1 , where N H (standing for the "number of hyphotheses") is the total number of different values assigned to θ.We can then define a prior distribution over the new hypotheses as which is computationally straightforward and has the advantage of giving a valid probability distribution.Now, let's define the discrete equivalent of the operators in Eq. ( 16) as { X( θi )} N O i=1 , where Hence, the approximate score S can be expressed as The value of S will depend on a chosen discretization {θ k } of the continuous parameter θ-the finer the discretization, the better the approximation.Hence, for a given discretization {θ k }, the optimum score is given by either maximizing or minimizing S again over the pair {{ θi } i , {T i } i } of estimates and testers.
In what follows we propose three different methods, all based on semidefinite programming, with which this approximation can be computed.
A. Method 1: Approximating metrology with channel discrimination
The first approach we propose is heavily based on the problem of channel discrimination [53].Its starting point is the realization that, without loss of generality, we may restrict ourselves to testers with as many measurement outcomes as there are hypotheses to be distinguished.In the context of our discretized parameter estimation problem, this amounts to setting N O = N H ; essentially, there is no advantage in increasing the number of measurement outcomes beyond the number of different values in the discretization of θ.The second simplification is to choose the values of the estimates { θi } to be the same as the values in the discretization {θ k }, in such a way that each measurement outcome i is directly associated to a value θi = θ i .Choosing the values of the potential estimates of the parameter to correspond to the values in the discret-ization of the parameter reduces the estimation problem to a discrimination problem.In this case, the task can be interpreted as determining the "classical" label k that is encoded via the values of θ k in the channel C θ k .In this case, the set of operators { X( θi )} becomes where N = N O = N H , and the approximate score S becomes For this fixed values of the discretization {θ i } N i=1 , the optimum value of S over all testers T is an SDP.
This approach circumvents problem (1), of finding the optimal estimators, by setting them to be the same values used in the discretization of the continuous parameter θ; and problem (2), of computing the integral in Eq. ( 16), by discretizing it.In principle, the higher the number of values in the discretization of θ, the closer the estimates are to the optimal estimator.The advantage then is that the optimal score can be found with a single SDP that needs to optimize only over the quantum strategies.The drawback, on the other hand, is that to achieve a good approximation of the optimal estimator, a high number N of values in the discretization are necessary, and since this number is directly associated to the number of measurement outcomes in the quantum strategy, the problem can eventually become intractable numerically and experimentally.In practice, however, as demonstrated in our examples in Sec.V, this method yields very good results with a value of N that can still be straightforwardly handled numerically.
Nevertheless, our next approach is designed to overcome this problem as well.
B. Method 2: Parameter discretization with optimal estimator
One possible way to overcome computational challenges is to fix the number of measurement outcomes N O , and hence the number of tester elements, to a value that is computationally (and experimentally) tractable and increase the value of N H far beyond that.Since, in this case, the complexity of the problem does not depend on N H , the discretization of θ can be arbitrarily fine.However, because the number of values in the discretization of θ can far surpass the number of measurement outcomes, the association of one estimate θi to each discretization value θ k is no longer possible.Therefore, the problem of choosing a "good" set of estimates { θi } is crucial.
Let us start by assuming that the optimal estimator is known to be { θ * i }.Then, the operators { X( θ * i )} amount to for a fixed discretization {θ k } N H k=1 , which can now in principle contain an arbitrarily high number of values N H ≫ N O .The approximate score S then becomes The optimization of S over the quantum strategy is then given by an SDP.This approach essentially takes care of problem (2), of numerically computing the integral in Eq. ( 16), by discretizing the parameter θ in an arbitrarily fine manner, while maintaining the number of measurements low enough to decrease the computational demand of the SDP.Hence, it is better suited for a situation in which the optimal estimator is known.It can nevertheless also be applied to a problem in which only a good guess for the optimal estimator is known, in which case the solution will be an approximation of the optimal score.Otherwise, to overcome problem (1) of finding the optimal estimator in the first place, we combine this approach with an estimator optimization in a seesaw algorithm, detailed in the following.
C. Method 3: Parameter discretization with estimator optimization
This final approach consists of a seesaw between two optimization problems-which are not necessarily SDPsthat will approximate an optimization over both the quantum strategy and the values of the estimates.
A seesaw is an iterative method that alternates between two optimization problems, using the solution of one as the input of the other.In our case, the first optimization problem is the SDP of the previous approach (Method 2).Namely, given where The second optimization problem will then be one that, for a fixed tester {T i }, taken to be the optimal tester of the previous SDP, optimizes over the values { θi } of the estimates.Namely, given where { θi } are N O possible values of θ.
Whether the problem in Eq. ( 28) is an SDP will depend on whether the reward function r(θ k , θi ) is linear on { θi }.In practice, this will often not be the case.Nevertheless, in some cases this problem can be solved analyticallydepending on the form of the reward function, the optimal estimator may be known or it may be found by standard Lagrangian optimization methods.In other cases, heuristic optimization methods may be applied.
This iterative method, although even for a fixed discretization {θ k } does not necessarily converge to the optimal value of S, in practice leads to very good approximations.A relevant point here is that, assuming a situation where the seesaw does converge to the optimal estimator, one may restrict themselves without loss of generality to a maximum number of outcomes N O that is related to the extremality properties of the tester.In principle, since (i) the set of testers T = {T i } is convex and (ii) the function S is linear on each tester element T i , the maximum (or minimum) of S will be achieved by an extremal tester.Analogously to extremal POVMs [54], extremal testers have at most d 2 (non-zero) elements, where d is the dimension of the space upon which the tester (or POVM) elements act.Hence, the number of outcomes N O in the seesaw can be fixed to be at most ), since, for optimal estimators, there is no advantage in optimizing over non-extremal testers.This fact also holds for Method 2 if one is guaranteed to know the optimal estimator.Furthermore, if the cost function is the mean squares error r MSE , then the optimal measurement will be projective (see Appendix A in Ref. [47]), and hence the optimal tester will have at most (d I × d O ) outcomes.We present a case study in Sec.V B, which concerns the problem of thermometry, that precisely falls in this case.
D. Convergence of the Methods
In all three methods above, we encounter some error due to discretization of the integral in finite hypotheses, as well as sub-optimality due to our choice of estimators.The discretization error is expected to vanish as N H increases, since all three methods are based on approximating an integral with a Riemannian sum with an error that vanishes as 1/N H .As for the sub-optimality, let us define the best approximate score S * similarly to Eq. ( 19) but for the approximate score defined in Eq. (22).For large enough N H , and when the cost function is supposed to be maximised, it means that S * ≤ S * while for cost functions that are supposed to be minimised, it means that S * ≥ S * .The sub-optimality roots from the fact that, none of the methods simultaneously optimise over both { θi } and {T i }.Each of the three methods, however, deals with sub-optimality differently.In all three methods one has and thus one can guarantee convergence by choosing N O ≫ 1.In Appendix B we rigorously derive the convergence for arbitrary cost functions and furthermore show that for certain cost functions that we will later use in the case studies (Examples 1. and 2.) the convergence is even faster, i.e., | S * − S * | = O(1/N 2 O ).When we cannot arbitrarily increase N O , Methods 2 and 3 come to the rescue.In particular, if a priori we know what the optimal estimators are, then Method 2 allows to find the optimal testers in one shot.However, it is rarely the case that we do know the optimal estimators a priori.Nonetheless, as we see in the examples below, Method 2 typically finds sub-optimal solutions that are very close to the optimum.Method 3, on the other hand, adds a powerful layer of optimization based on a seesaw between the estimators and testers, and therefore has a higher chance of finding the optimal protocol even with
V. CASE STUDIES
Our methodology for solving the Bayesian parameter estimation problem using higher-order operations offers numerous advantages over conventional techniques in quantum metrology.By proposing to optimize over the input state and measurement with a single SDP, and combining this with effective heuristics for the joint optimization of the quantum strategy and estimator, we overcome the longstanding challenges of the Bayesian approach.Our approach provides a comprehensive and versatile set of techniques that can be applied to any Bayesian estimation problem, setting it apart from most existing methods in the literature.
The key strength of our approach lies in its ability to handle a wide range of estimation problems, without being limited to specific error quantifiers.This universality allows our method to be seamlessly applied to any estimation scenario.Moreover, the techniques we described here are equally effective for single parameter and multiparameter estimation tasks.Finally, unlike most techniques in the Bayesian approach, our methods are not bound by the type of dynamics used to encode the parameter.Whether the parameter is encoded via a unitary evolution (e.g.phase estimation), or a more complex open system dynamics resulting from the probe's interaction with a thermal environment (e.g.quantum thermometry), our approach can be systematically applied and, as we show in the following, delivers consistent results which are very close to the optimal values.
We now delve into the practical application of our methods and explore how they can be applied to determine the optimal estimation strategy for various scenarios encountered in quantum metrology.The examples are deliberately chosen to be cover a wide range of different problems to demonstrate the versatility of our methods.
A. Example 1: Paradigmatic example -Local phase estimation
We start with a paradigmatic task in quantum metrology, namely the single-parameter unitary phase estimation.In this problem a single parameter θ ∈ [0, 2π) is encoded in an n-qubit quantum system via a local unitary channel where S z is a collective spin operator is the Pauli-Z matrix of the i-th qubit.Due to the intrinsic symmetry of the problem, every state of the n-qubit system can be effectively described using an (n+1)−dimensional Hilbert space, i.e. the symmetric subspace [55].Consequently, the n-qubit phase estimation problem can be equivalently mapped into a phase estimation problem of a d−dimensional system with d = n + 1.In this representation the generator of the dynamics, S z , expressed in the computational basis {|0⟩ , |1⟩ , . . ., |n⟩} is given by S z = n i=0 i |i⟩⟨i| [56].For this example, we take a typical reward function in phase estimation, which takes into account the cyclicity of its parameter space, given by We also choose two different priors, one given by a uniform distribution according to where θ min = 0 and θ max = 2π are respectively the minimal and maximal values of the parameter.The other distribution is given by a Gaussian distribution, according to where N is the normalization factor.We set the mean µ = π and the deviation σ = 1.For the discretization of the parameter θ and initial value of the estimators, we fix and respectively.We now discuss how to apply each of our methods to infer the optimal protocol in this case and present the results obtained for the problem of (n = 2)-qubit phase estimation, plotted in Fig. 2.
Method 2.Here we fix the number of outcomes N O ∈ {2, . . ., 10} and set the number of hypotheses to be N H = 1000 ≫ N O .We then discretize the parameter and set the estimators according to Eqs. (34) and (35).For the case of a uniform prior (Eq.(32), Fig. 2(a)), these are expected to be the optimal estimators.For the case of a Gaussian prior (Eq.( 33), Fig. 2(b), these estimators are not expected to be optimal, but are nonetheless used in Method 2, serving as a starting point for the estimator optimization in Method 3.
Method 3.For this method, we take the solution for the testers found using Method 2 for each N O as a starting point, and then optimize over the estimator.We prove in App.B that in this case the optimal estimators are given as a function of the optimal tester, according to where Note that with the definition of Eq. ( 36) the range of the estimator is ], instead of [0, 2π] as defined initially, this has no effect on the expected reward and can be resolved by adding 2π if θ * i < 0. Hence, the second step in the see-saw is solved analytically, and in each round of the seesaw we update the value of the estimators according to the expression above, as a function of the testers found by the SDP in the first step.Here and in the following examples, we iterate these two steps until the gap between the value of the score in subsequent rounds is smaller than 10 −6 .
Results.In Fig. 2 we plot the maximal approximate scores S obtained via the three methods outlined above.In the case of the uniform prior (Fig. 2(a)) we observe that the approximate score very quickly reaches the optimal one, i.e. S ≈ S = 1 2 (1 + cos(π/4)) which was formerly obtained with alternative methods [28,57].All three methods converge very quickly to this solution, already for N O = 3 outcomes, using Method 2 and 3, and for N O = 4 outcomes, using Methods 1.Notice also that, since we start already at the optimal estimator, we observe that for all N O there is no advantage of applying the seesaw in Method 3, since Method 2 already returns the optimal solution.For the case of the Gaussian prior (Fig. 2(b)), we see that again Method 3 converges to a stable value of S with N O = 4 outcomes.Here we can see an initial difference between Methods 1 and 2, which take a fixed estimator, and Method 3, which optimizes over the estimator.Nevertheless, all methods quickly converge to approximately the same value, at N O = 10.
B. Example 2: Non-unitary evolution -Thermometry
Let us now discuss a different instance of single parameter estimation, namely thermometry [58,59].In this case, the unknown parameter is the temperature θ of a sample (or a thermal bath) that is resting at thermal equilibrium, and it is encoded in the probe using a nonunitary quantum channel.We consider the probe to be a two level system (qubit) which is potentially entangled with an auxiliary system-that does not undergo the non-unitary dynamics.At the initial time t = 0, the probe and the sample which are initially uncorrelated start to interact.After some fixed time t, the probe and the auxiliary system will be jointly measured to infer the temperature of the bath.The probe's reduced state ρ p θ = tr A [ρ θ ] evolves according to a standard Markovian quantum master equation [60][61][62][63], i.e.
where H = ϵ |1⟩ ⟨1| is the Hamiltonian of the probe, σ − = |0⟩ ⟨1| and σ + = |1⟩ ⟨0| are the jump operators, and is the dissipator superoperator which captures the effect of the environment on the probe.The dissipation rates Γ in and Γ out are the only temperature dependent parts of the dynamics and are responsible for encoding the parameter.For a bosonic/fermionic environment, we have Γ in = J(ϵ)N B/F and Γ out = J(ϵ)(1 ± N B/F )-where minus sign should be used for fermions, and positive sign for bosons-with J(ϵ) being the bath spectral density while N B/F is the occupation number for the bosonic or fermionic bath, defined as N B = (e ϵ/θ − 1) −1 and N F = (e ϵ/θ + 1) −1 , respectively.In what follows, we focus on the bosonic bath, however our methods can be applied to the fermionic case as well.
The evolution specified by Eq. ( 38) generates an effective quantum channel E θ (t) that imprints the temperature into the probe's state (see App. D for the explicit expression).Note that in our notation we keep the time dependence because we are also interested in the optimal protocol at different times.
As for the cost function, we use the MSE while the prior distribution p(θ) is uniform and given by Eq. ( 32), where we set θ min = 0.1 and θ max = 2 as the minimum and maximum values of the temperature.We discretize the temperature parameter θ and fix the estimators according to Eqs. (34) and (35), respectively.We evaluate the thermometry problem for 100 different time steps, evenly distributed between t = 0 and t = 1.
Let us now discuss how to approach this problem using each of the three methods presented in this work.
Method 2.Here we fix the number of outcomes N O ∈ {2, . . ., 20} and set the number of hypotheses to be N H = 1000 ≫ N O .These values for the estimator are not expected to be optimal but are nevertheless used in Method 2, serving as an starting point for the estimator optimization in Method 3.
Method 3. To apply the seesaw in Method 3, we again begin with the solution provided by Method 2 for each N O and t as a starting point.For the thermometry problem, as was the case for the phase estimation problem, we can analytically express the optimal estimator as a function of the quantum strategy (tester).Since the score is quantified using the MSE, the optimal estimator is simply the mean over the posterior where ⟨θ k ⟩ is given by Eq. (37).Once again, in this example the first step of the seesaw consists in an optimization over the testers while the second step consists in reassigning a value to the estimators as a function of the testers found in the previous step, according to the above expression.
Results.In Fig. 3 we compare the performance of the three methods outlined above as a function of the number of outcomes N O for a fixed time t = 0.05.We observe that Methods 1 and 2 start from a relatively large S, which is now being minimized, that gradually decreases with increasing N O , while Method 3 already starts at a value of S close to where it will converge.While Method 2 quickly converges to the same values of S of Method 3 with increasing N O , at N O = 20 the approximate score predicted by Method 1 is still somewhat above the corresponding one obtained using Method 3.This is a result of the error in approximating the operators X( θi ) using a Riemannian sum with N H elements.Indeed, it is guaranteed that only in the limit N H → ∞ the approximate score in Method 1 converges to the true optimal score.Finally, we observe that Method 3 saturates around its optimal value already for This is expected since, for a MSE cost function, projective measurements are optimal [47].In Fig. 4, we focus on this point, by comparing Method 3 for some fixed values of N O as a function of time t, for the whole interval of time evaluated in this problem.Here we can see that while there is an improvement in increasing N O up to 4, the curve for N O = 16 lies on top of that of N O = 4, demonstrating that there is indeed no advantage in increasing the number of outcomes beyond Another interesting point to make here is that the score clearly depends on time of evolution.In particular, in Fig. 4 we observe that there exist times t < ∞ where the score is much better (recall that for a MSE cost function, the optimal score is being minimized) than at t → ∞.This means that measuring the probe in the transient regime can be advantageous over estimation performed after reaching the steady state.Indeed, this effect has been observed in thermometry previously [64].For the steady state, the optimal measurement strategy is known.In particular, in this case the auxiliary system is useless and the optimal measurement is a PVM in the basis of the probe Hamiltonian [65].However, as we discuss in the next paragraph, here we show that this is not the case for the transient regime, where entanglement with the probe leads to a more precise estimation.Our techniques therefore allow to determine the optimal probe and memory state, as well as the measurements and estimators in this difficult regime.
Finally, in order to investigate the role of entanglement in the transient regime of temperature estimation, we compare two different measurement strategies: a general strategy where the memory can be initially entangled The inset shows the same curves plotted in a log-log scale.We see that there are times t for which the precision of estimation is better than in the steady state t → ∞.This can be understood as an advantage arising from having entanglement with the probe: the entanglement allows the transfer of the information about the parameter into the memory system which is itself not subject to the dephasing dynamics of the master equation.As a consequence, measuring the entangled probe and memory systems before the joint system thermalizes provides a significant advantage.
with the probe, and the scenario where the initial state of the memory and probe is separable.Figure 5 highlights the importance of the entangled auxiliary qubit.To this aim, we have focused again on Method 3, and depict the approximate score as a function of time, for strategies with and without entanglement.As one expects, at the limits of very short time or very long time the two kinds of strategies perform equally.While in the former this is because there has not been enough time to collect and add new information to the prior, in the latter case it is because after a long time, the system reaches a steady state regardless of the input state; namely, of it being entangled or not.However, at the transient regime, we observe that an auxiliary system entangled with the probe can significantly improve the score.Let us emphasize that very often the parameter estimation problem described above cannot be solved analytically and is very difficult to solve numerically.In general, the effective evolution of the probe may result from a complicated master equation, which has to be evaluated many times.In our approach there is no need to evaluate the evolution for each potential state of the probe, as the only thing we need are the Choi operators associated with the effective channels.In this sense, our methods only require to solve the dynamics on a finite grid of parameter values, and thus makes finding the solution more tractable The entanglement of the optimal initial state ρ = |Ψ⟩⟨Ψ| IA in Eq. ( 41) as function of time found by method 3 for NO = 4.
The corresponding score S is given in Fig. 4, the physical parameters are given in Fig. 3.
numerically.
Finally, for the method 3 we investigate the optimal initial probe-ancilla state ρ found by our algorithm for different times t ∈ [0, 1].We consider the case with four outcomes N O = 4, which is found to give the optimal precision.Here the tester has four elements {T 1 , . . ., T 4 }, and the initial state can be computed with the help of equations (11,12).Note that the state ρ = |Ψ⟩⟨Ψ| IA is pure by construction, and its Schmidt diagonal form reads (for p 0 ≥ p 1 ) where n is the Bloch vector corresponding to the state |n⟩⟨n| = 1 2 (1 + n T σ) and basis choice of the ancillary qubit plays no physical role.We find that for all times the optimal state is always Schmidt diagonal in the computational basis for the probe, and the Schmidt state corresponding the the larger value p 0 is always the ground state, i.e. |n⟩ I = |0⟩ I .In contrast, the amount of entanglement in the optimal state depends on the interaction time t, as shown in Fig. 6.In particular, for the two first times t = 0 and t = 0.01 the state is closed to be maximally entangled.At t = 0 any state encodes no information on the parameter.At t = 0.01 this is due to a numerical error, as the score S is still found to be maximal by the algorithm, see the orange line Fig. 4. Then we find that the entanglement in |Ψ⟩, as captured by the value p 0 changes smoothly with t.Asymptotically t → ∞, we know that all initial states do equally well as the dynamics maps prepares as the steady state, which is product with the auxiliary system's state independent of the parameter.Moreover, from Fig. 5 it is clear that the presence of entanglement in the initial state does not give any substantial improvement for t close to one.We have also considered the optimal measurements {M 1 , . . ., M 4 } in Eq. ( 13) found by the algorithm.We see that the POVM elements are given by rank-1 projectors.The states corresponding to the projectors are also Schmidt diagonal in the computational basis (for the probe), and the amount of entanglement first increases and then decreases for t ≥ 0.06 (similar to Fig. 6).All the data about the optimal strategies found with our methods is available in our repository [46].
Lastly, as we have readily pointed out our methods can be simply adopted to other reward functions.As an example, in the Appendix D we address the thermometry problem with the mean square logarithmic error as the reward function-which has gained attention in recent years due to respecting scale-invariance properties [66][67][68].
C. Example 3: Multi-parameter estimation -SU(2) gates
For our final example, we will consider a more complex metrology problem which involves multiple parameters.This is the problem of estimating any qubit unitary-the group SU (2).
As a first observation, note that any qubit unitary operator can be parameterized in terms of three independent parameters θ := (θ x , θ y , θ z ), with 0 ≤ θ i < 2π for all i ∈ {x, y, z}, as Here, σ i for i ∈ {x, y, z} are the three Pauli operators.Since these generators do not commute, the estimation of the unitary U θ (or equivalently the parameter vector θ) is a multiparameter estimation problem.The unitary channel that acts on the probe system and encodes the parameter θ is then given simply by E θ [•] = U θ (•)U † θ , and will have a Choi operator C θ associated to it.
We take a natural reward function that captures how close the estimated unitary is from the actual one, which is the fidelity, i.e., where in this example d = 2.Here C θi -and for later reference C θ k -are defined analogously to C θ , for a vector of estimator values θi = ( θx a , θy b , θz c ) and for a vector of discretization values θ k = (θ x a , θ y b , θ z c ).Here, we again analyze the cases of two different prior distributions, a uniform prior, as in Eq. ( 32), and a Gaussian prior, as in Eq. (33).
The parameter vector θ = (θ x , θ y , θ z ) is discretized into values θ k = (θ x a , θ y b , θ z c ), where each of the three elements follow the discretization in Eq. ( 34), with θ min = −π and θ max = π, and all with the same value of N H = n H . Notice that this will amount to a final number of different discretization values of N H = n 3 H .The initial set of estimators θi = ( θx a , θy b , θz c ) is also set according to Eq. ( 35) for each parameter estimator, using the same values of θ min = −π and θ max = π, and all with the same value of N O = n O .Here again this amounts to a total number of outcomes equal to N O = n 3 O , analogously to the indexation of the parameter discretization.
We now discuss the application of our methods to this specific problem, plotting our results in Fig. 7.
Method 1.To apply Method 1, we again set n H = n O = n for each of the three parameters, amounting to a total number of discretization values and of outcomes equal to N = n 3 .For the results presented here, we look at values of n = N 1/3 ∈ {2, . . ., 10}.
Method 2.Here we fix the number of outcomes n O = N 1/3 O ∈ {2, . . ., 10} and set the number of hypotheses to be N H = n 3 H = 10 3 .Notice that while the total number of hypothesis is 1000, each of the three parameters is discretized in only 10 different values.These values for the estimator are not expected to be optimal but are nevertheless used in Method 2, serving as an starting point for the estimator optimization in Method 3.
Method 3. To apply the seesaw in Method 3, we again begin with the solution provided by Method 2 for each N O as a starting point.In this case, we optimize over the estimators using standard gradient descent techniques.Therefore, the first step in the seesaw is the SDP in Method 2, while the second step is a heuristic search over the estimators for a fixed tester.
Results.In Fig. 7, we plot the maximal approximate scores S obtained via the three methods outlined above.Panel (a) in Fig. 7 concerns the case of a uniform prior distribution and panel (b) that of a Gaussian prior distribution.In both cases we observe Method 3 converging to its final value of S with N
VI. CONCLUSION AND OUTLOOK
We introduced a new set of tools for addressing Bayesian parameter estimation problems, applying techniques from the formalism of higher-order operations and drawing inspiration from the problem of channel discrimination.
The key insight that we exploit consists of describing the quantum strategy, i.e., the state preparation and the measurement (see Fig. 1), as a single operation called a quantum tester.The later is characterized by SDP constraints, and can thus be optimized efficiently.We developed three methods for determining the state of the probe, the measurement, and the estimators in any parameter estimation problem, regardless of prior distribution, reward function, or description of the quantum evolution.
The first method exploits the connection between the Bayesian approach to parameter estimation and quantum channel discrimination.By discretizing the parameter to a finite set of values, and by furthermore associating each value of the estimator to a value of the discretized parameter, one can directly map a parameter estimation task onto a channel discrimination one, albeit with a reward function which inherits the geometry of the original parameter set.We leveraged this connection to create a general method for approximating the optimal solution of the estimation problem within any arbitrary precision.We also proved that our approximation converges to the optimal score.Although this method is conceptually simple and comes with a convergence guarantee, it may nonetheless be computationally demanding since the size of the optimization variables increases with the finess of parameter discretization.Our second method computes an approximation of the optimal quantum strategy for a fixed set of estimators.This method is less computationally demanding in general, but it relies on a good guess for optimal estimators, which is not always available.To address this drawback, our third method iteratively combines an optimization over the quantum strategy and over the estimators, and hence does not require any previous knowledge over the estimator.
A key advantage of our methods is their universal applicability.They can be used for any parameter estimation problem, regardless of the nature of parameter-encoding or the number of parameters to be estimated.To showcase this wide-ranging applicability, we examined three distinct case studies of high practical importance: local phase estimation, thermometry, and SU(2) estimation.
We also developed tools to bound the performance of estimation strategies that do not require entanglement and used them to show that, in the thermometry problem, probe states that are entangled with an auxiliary system lead to a more precise estimation of the temperature parameter, particularly at finite times.
Our work provides a starting platform for the application of higher-order operations to the problem of Bayesian parameter estimation.We conclude by summarizing further research directions that could draw further benefits from this approach.
Generalization to many-shots.The quantum strategies explored here concern a situation in which, at each independent realization of the experiment, one is given access to a single call, or copy, of the channel that encodes the parameter θ.It is also the case that, in more general scenarios, where one has access to multiple calls at once, the optimization of the quantum strategy can be done with SDP as well.Multiple-copy testers can take different forms, describing different classes of quantum strategies, such as parallel (non-adaptive) and sequential (adaptive), or even those involving an indefinite causal order.Such testers have been defined in Refs.[41,42] and explored in a frequentist approach to metrology [44].These techniques can also be applied to multiple-copy Bayesian estimation protocols, and be exploited to investigate, for example, whether different classes of estimation strategies can lead to higher precision in the parameter estimation.Similarly, strategies with an indefinite causal-order could lead to establishing new types of metrological resources.
While in the multi-shot scenario one can find the global optimal protocol as explained above, it can be hard to implement in practice, due to the exponential growth of of Hilbert space dimension.Alternatively, one can seek greedy optimal algorithms [69].In such a strategy, one would (i) perform the optimisation protocol as subscribed in our work, (ii) update the prior distribution to the posterior distribution based on the outcome and repeat steps (i) and (ii) until all shots are consumed.Despite not being necessarily globally optimal, this strategy can be very strong and has shown to be asymptotically optimal in some cases [70].An example of our interest is in Bayesian equilibrium thermometry [68].Regardless of global optimality, it is practically easier to implement such multi-shot strategies since the required operations do not involve exponentially increasing Hilbert spaces.
Applications to complex noise models.One of the key advantages of our approach is its versatility in handling various types of parameter-encoding dynamics.Often sensing methods are limited to specific parameterencoding channels, however, our approach can effectively model and accommodate any type of dynamics and address different types of noise.Specifically, if one has a good description of the noise appearing in the measurement process, one can simply incorporate this noise into the encoding channel and compute optimal testers and optimal estimators according to any one of the three methods.Therefore, applying these techniques to real (noisy) experimental settings to infer their actual performance would be an interesting next step.
Quantum metrology techniques for asymptotic quantum channel discrimination.In this study, we utilized higherorder operations, a technique previously used to study channel discrimination, to provide a nearly optimal solution for the quantum parameter estimation problem.It would be interesting to explore whether the reverse approach could also yield novel insights into the field of channel discrimination.Specifically, one could investigate whether leveraging asymptotic theoretical results from quantum metrology, such as the Heisenberg scaling, can contribute to the investigation of asymptotic quantum channel discrimination.This direction holds promise for gaining a deeper understanding of the relationship between channel discrimination and quantum metrology.
Connections with the multi-hypothesis testing problem.The discretization of the parameter space that we perform suggests that the Bayesian estimation problem can be connected with a multi-hypothesis testing problem [71].However, it should be noted that this connection is only partial.Indeed, our work exploits the fact that Bayesian estimation can be seen as a multi-hypothesis testing problem with (i) a continuous set of hypotheses and with (ii) a specific geometry on the "hypothesis space" as captured by the cost function.Still, we believe that the methods developed here (Method 1 in particular) could be potentially useful for determining bounds on the errors for the multi-hypothesis testing problem.Another interesting question is whether some of the bounds on error probabilities arising in the multi-hypothesis testing scenario could be also applicable in the Bayesian setting.
All code developed for this work is freely available in our online repository [46].
Appendix B: Convergence of the approximations
The cornerstone of our results is the discretization of the estimators and the hypotheses.The intuition suggests that as the discretization is made finer, the approximation becomes more precise and converges to the exact value.Here, we make this statement more rigorous.We focus on a score function that needs to be maximised; for those that require minimization a similar argument holds.
First, let us denote the optimal protocol by {{T * i }, { θ * i }}, it maximizes the score in Eq. ( 17) to it's optimal value S * .We know that the optimal protocol has at most D := (d I × d O ) 2 elements, therefore Now consider another protocol in which the estimators are fixed to { θi } N O i=1 , and the tester { Ti } N O i=1 is the solution of the SDP maximizing the score for the given estimators The protocol achieves a certain score where ϵ i quantifies how different these values are.For simplicity we also introduce ϵ := max i |ϵ i |.Note that for concreteness we here used the absolute value of the difference | θi − θ * k | as a distance between the estimated values, however any other distance d( θi , θ * k ) could be used here and below to define θ * k and ϵ instead (e.g. in the multiparameter case).The new values { θ * i } allow us to define the protocol {{T * i }, { θ * i }}, where the tester are taken from the optimal protocol but the estimators have been modified.It achieves a certain score Here, we used the fact that by construction { θ * k } D i=1 form a subset of { θi } N O i=1 , therefore the maximization over the tester {T i } N O i=1 includes the maximization over the tester {T i } D i=1 (some of the tester elements can be identically zero).
Our next goal is to bound the deviation between D i=1 tr X( θ * i ) T * i and the optimal score S * , which are obtained with the same tester.To do so we recall their Bayesian interpretation in terms of posterior parameter distribution in Eq. ( 4) where each expected values E (i) [•] is taken with respect to the probability distribution p(θ|i).This allows us to write Here, it is intuitively clear that for nearby value θ * i and θ * i the expected values E (i) [r(θ, θ * i )] and E (i) [r(θ, θ * i )] will also be close, provided that the reward function r is regular enough.For simplicity let us now assume that it is Lipschitz continuous, i.e., for any small enough ϵ ≤ δ (here K r might depend on delta) which directly implies S * ≥ S * − K r ϵ.Finally, for a scalar parameter the N O estimators { θi } can be chosen such that ϵ ≤ L N O , where L is some constant depending on the prior.This would guarantee the convergence to the optimal score with 1.The case of reward functions that are not Lipschitz continuous Notably, Lipschitz continuity of the reward function is not necessary to guarantee the convergence of the score S * → S * .However, in such cases it seems difficult to make a general statement, which might furthermore require to assume some regularity of the prior.Nevertheless, for illustration let us consider a piece-wise constant reward function that can be used to define a confidence interval for the parameter.This reward function coincides with the recent proposal in Ref. [40].This function is manifestly discontinuous, with r(θ, θ * i ) − r(θ, θ * i ) taking the value +1 on an interval θ ∈ I i + of width | θ * i − θ * i | ≤ ϵ, the value −1 on another interval of the same width, and is otherwise zero.From Eq. (B9) we then find where each probability Pr (i) [f (θ)] = dθf (θ)p(θ|i) is taken over the conditional distribution p(θ|i).Defining the union of all the intervals I + = ∪ D i=1 I i + we can further upper bound the score difference with where in the last term the probability is taken over the prior distribution p(θ) = i p(i)p(θ|i), and we used In this case the score function is a cost which has to be minimized, so to match to the notation with the previous section we consider maximization of r(θ, θ) = −(θ − θ) 2 .We have where ϵ i is defined in Eq. (B6).Plugging this in the Eq.(B9) one gets But for the MSE we know that the optimal estimator is the mean, i.e. θ * i = E (i) [θ] = dθ θp(θ|i).Therefore, the second term is zero and we find with ϵ = max i |ϵ i |.
3. Convergence for the cos 2 reward function In the phase estimation problem we considered a reward function that reads r(θ, θ * i ) = cos 2 θ− θ * i 2 . First of all, note that in this case the optimal estimator can be find in a closed form.To do so, we first rewrite the score for the posterior distribution p(θ|i) as E (i) cos 2 θ − θ * Impose that the derivative of the score with respect to the estimator θ * i is zero Which is equivalent to and admits two solutions θ * i = arctan ⟨sin(θ)⟩ (i) ⟨cos(θ)⟩ (i) or θ * i = arctan ⟨sin(θ)⟩ (i) ⟨cos(θ)⟩ (i) + π, (B25) with the notation from the main text.We then need to pick the value which gives the highest contribution to the reward in Eq. (B22).In fact, up to a constant the reward is the scalar product between the vectors (⟨cos(θ)⟩ (i) , ⟨sin(θ)⟩ (i) ) and (cos θ * i , sin θ * i ), so it's maximum is attained when the two vectors are in the same half of the disc.Since the range of arctan ∈ [− π 2 , π 2 ] corresponds to positive cosine, the choice of the optimal estimator solution depends on the sign of ⟨cos(θ)⟩ (i) θ * i = arctan ⟨sin(θ)⟩ (i) ⟨cos(θ)⟩ (i) ⟨cos(θ)⟩ (i) ≥ 0, arctan ⟨sin(θ)⟩ (i) ⟨cos(θ)⟩ (i) + π otherwise.
where in the penultimate line we use the optimality criterion Eq. (B23), and used the definition of ϵ i in Eq. (B6), and ϵ = max i |ϵ i |.Here, we provide some technical details for the Example B where we want to estimate the temperature of a bosonic bath.A qubit that is initially prepared in the state ρ p (0) = Remark.-Theonly temperature dependence comes from N B/F .In particular, the Hamiltonian term is independent of (θ) and thus can be ignored.Then the optimal solution for this problem should be rotated with the same Hamiltonian in order to compensate for it.As such, we can ignore the phases in the off-diagonal terms above.
Using the expected mean logarithmic error as a cost function In the main text, we took the MSE as our figure of merit.However, in recent years, an alternative cost function has been put forward for thermometry, which is motivated by scale invariance [67].This is the so called expected mean square logarithmic error (EMSLE) at the kernel of which lies the following reward function r(θ, θ * i ) = log 2 ( θi /θ), which can be analytically solved to find the optimal estimator as [67] θ * i = exp dθp(θ|i) log(θ) .(D4) Interestingly, for this cost function, one can also prove that the optimal POVM is in fact a PVM [66].Our results straightforwardly apply to such figure of merit.We showcase this by reproducing our Figs.3, 4, and 5.These are depicted here in the three panels of Fig. 8, respectively from top to bottom.The fact that PVMs are optimal is reflected in the middle panel, where our method M3 is optimal with only N O = 4 outcomes.
iFigure 1 .
Figure 1.Strategy for Bayesian parameter estimation.The left panel represents the prior probability distribution of the parameter θ encoded in the channel E θ .The center panel shows a single-shot strategy of parameter estimation in which part of a quantum state ρ is sent through the channel E θ and then measured by POVM {Mi}, yielding a classical outcome i.The right panel then represents the posterior probability distribution of the parameter θ, conditioned on the obtained measurement outcome i.
Figure 2 .
Figure 2. Local phase estimation (Example 1).The maximum approximate score S in a local n = 2 qubit phase estimation problem.Each panel shows the scores corresponding to Methods M1, M2, and M3 as a function of the number of outcomes NO ∈ {2, . . ., 10} for different prior distributions of the local phase: panel (a) corresponds to the case of uniform prior, while (b) corresponds to a Gaussian prior.The phase parameter ranges from θmin = 0 to θmax = 2π.The considered cost function is the cosine squared in Eq. (31).
Figure 3 .
Figure 3. Thermometry (Example 2).The minimum approximate score S in the finite-time temperature estimation problem S, renormalized by the maximum value of S in the plot.The temperature θ is encoded via a qubit non-unitary evolution specified by Eq. (38) acting for an amount of time t < ∞, here shown for a fixed time t = 0.05.The plot shows the different scores corresponding to Methods M1, M2, and M3 as a function of the number of outcomes NO ∈ {2, . . ., 10} for a uniformly distributed prior in a temperature parameter range of θmin = 0.1, θmax = 2.The considered cost function is the MSE in Eq.(39).The remaining parameters chosen are ϵ = 0.1 and J(ϵ) = 2.
Figure 5 .
Figure 5. Thermometry (Example 2): Advantage of entanglement in the transient regime.The main panelshows the approximate score S (computed via Method 3) as a function of the evolution time t for a fixed number of outcomes NO = 4.The parameters chosen are as in Fig.3.All values are renormalized by the maximum value of S in the plot.The inset shows the same curves plotted in a log-log scale.We see that there are times t for which the precision of estimation is better than in the steady state t → ∞.This can be understood as an advantage arising from having entanglement with the probe: the entanglement allows the transfer of the information about the parameter into the memory system which is itself not subject to the dephasing dynamics of the master equation.As a consequence, measuring the entangled probe and memory systems before the joint system thermalizes provides a significant advantage.
Figure 6 .
Figure 6.Thermometry (Example 2): Optimal state.The entanglement of the optimal initial state ρ = |Ψ⟩⟨Ψ| IA in Eq. (41) as function of time found by method 3 for NO = 4.The corresponding score S is given in Fig.4, the physical parameters are given in Fig.3.
Figure 7 .
Figure 7. SU(2) estimation (Example 3).The maximum approximate score S for an SU(2) multiparameter estimation problem.Both panels show the scores corresponding to Methods M1, M2, and M3 as a function of the cubic root of the total number of outcomes N 1/3 O ∈ {2, . . ., 10}, for different prior distributions of the phase parameters (θ x , θ y , θ z ).Panel (a) shows the case of a uniform prior while (b) corresponds to a Gaussian prior.Each of the three parameters ranges from θmin = −π to θmax = π.The considered cost function is the fidelity in Eq. (43).
1 / 3 O 3 O 3 O 1 / 3 O = 10 ,
= 3, i.e. total number of outcomes N O = 27.This is consistent with the fact that, in this case, extremal testers have at most(d I × d O ) 2 =16 outcomes, and hence, for N 1/= 2 (total number of outcomes N O = 8) we are not yet optimizing over all possible extremal testers.This is an interesting case where extremal non-projective POVMs with (d I × d O ) 2 outcomes show improvement over (d I × d O )outcome PVMs.Method 2 quickly approaches this same value, while Method 1 requires higher values of N 1/.Nevertheless, as expected, for a larger number of outcomes, namely N all methods yield the same result.
Appendix D :
Details of thermometry (Example 3)
Figure 8 .
Figure 8.The thermometry problem seen from the perspective of the EMSLE as the cost function.The top, middle and low panels correspond to the Figs. 3, 4, and 5 of the main text, respectively-note the logarithmic scaling in the middle and bottom figures.All other parameters are kept the same as the corresponding graphs in the main text.
4outcomes.The approximate score S is computed using Method 3 as a function of time t for different values of NO.The parameters are chosen as in Fig.3.The inset plot is a log-log plot of the same curves.All values are renormalized by the maximum value of S in the plot.Since the cost function is the MSE, in this case projective measurements (which have at most dI × dO outcomes) are optimal.Indeed, we observe that increasing NO beyond 4 does not change the value of the score. | 16,112.6 | 2023-11-02T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Andrographolide Protects against Aortic Banding-Induced Experimental Cardiac Hypertrophy by Inhibiting MAPKs Signaling
Despite therapeutic advances, heart failure-related mortality rates remain high. Therefore, understanding the pathophysiological mechanisms involved in the remodeling process is crucial for the development of new therapeutic strategies. Andrographolide (Andr), a botanical compound, has potent cardio-protective effects due to its ability to inhibit mitogen-activated protein kinases (MAPKs). Andr has also been shown to inhibit inflammation and apoptosis, which are factors related to cardiac hypertrophy. Our aim was to evaluate the effects of Andr on cardiac hypertrophy and MAPKs activation. Thus, mice were subjected to aortic banding (AB) with/without Andr administration (25 mg/kg/day, orally). Cardiac function was accessed by echocardiography and hemodynamic parameters. Our results showed that Andr administration for 7 weeks decreased cardiac dysfunction and attenuated cardiac hypertrophy and fibrosis in AB mice. Andr treatment induced a strong reduction in the transcription of both hypertrophy (ANP, BNP, and β-MHC) and fibrosis related genes (collagen I, collagen III, CTGF, and TGFβ). In addition, cardiomyocytes treated with Andr showed a reduced hypertrophic response to angiotensin II. Andr significantly inhibited MAPKs activation in both mouse hearts and cardiomyocytes. Treatment with a combination of MAPKs activators abolished the protective effects of Andr in cardiomyocytes. Furthermore, we found that Andr also inhibited the activation of cardiac fibroblasts via the MAPKs pathway, which was confirmed by the application of MAPKs inhibitors. In conclusion, Andr was found to confer a protective effect against experimental cardiac hypertrophy in mice, suggesting its potential as a novel therapeutic drug for pathological cardiac hypertrophy.
INTRODUCTION
Heart failure (HF) is a burgeoning problem that affects more than 20 million individuals worldwide (Tham et al., 2015). Cardiac hypertrophy is a response of the heart to increased workload (such as aortic stenosis, hypertension, and dilated cardiomyopathy) and various insults (such as myocarditis and myocardial infarction). It typically progresses to heart failure (Lyon et al., 2015;Shimizu and Minamino, 2016). Increased cardiomyocyte size and thickening of ventricular walls are the main features of cardiac hypertrophy (Lyon et al., 2015;Shimizu and Minamino, 2016). Maladaptive hypertrophy involves increased cardiomyocyte hypertrophy and, apoptosis as well as increased fibroblast activation, which are associated with reduced systolic function and increased heart stiffness (Samak et al., 2016;Shimizu and Minamino, 2016). Left ventricular hypertrophy is positively correlated with an increased risk of adverse cardiovascular events (Hill and Olson, 2008). Accumulating evidence indicates that multiple signaling pathways participate in the progression of cardiac hypertrophy, such as insulin growth factor/PI3K/Akt, protein kinase C, and mitogen-activated protein kinases (MAPKs), β-adrenergic receptor, calcineurin/NFAT, and Ca 2+ /CaMKII signaling (Hou and Kang, 2012;Tham et al., 2015). However, the complexity of the mechanism underlying the transition from hypertrophic processes to heart failure and the difficulty in reversing cardiac hypertrophy contribute to the high mortality rates of heart failure. Therefore, new pharmacological agents that selectively inhibit the progression of cardiac hypertrophy are of great therapeutic interest.
MAPKs are involved in diverse biological events, including proliferation, differentiation, metabolism, motility, survival, and apoptosis (Liu and Molkentin, 2016). MAPKs subfamilies include ERK1/2, c-Jun NH2-terminal kinases (JNK), and P38 kinase (Liu and Molkentin, 2016). ERK1/2 signaling co-ordinates the eccentric and concentric growth of the heart. In addition, ERK autophosphorylation at Thr188 facilitates ERK1/2 activity toward nuclear targets, which is a critical event in the induction of ERKmediated cardiac hypertrophy in response to various stimuli (Lorenz et al., 2009;Li W.M. et al., 2017). JNK can shuttle between the cytoplasm and the nucleus to exert its effects. Overactivation of JNK leads to a hypertrophic phenotype, and abrogation of JNK activity attenuates endothelin-1 (ET-1) and the pressure overload-induced hypertrophic response (Li W.M. et al., 2017). P38 is rapidly activated within a few minutes of exposure to aortic pressure or volume overload. Cardiac-specific overexpression of P38 leads to enhanced cardiac hypertrophy in response to pressure overload (Liao et al., 2001). The complexity of the signaling transduction network makes it impossible and imprudent to label any molecule as definitively 'bad' or 'good.' Thus, by focusing on network interactions rather than individual signaling molecules, we have a better chance of influencing the outcome.
Andrographis paniculata is a traditional medicinal herb that is used in China (Banerjee et al., 2017). Andrographolide is the major bioactive component of Andrographis paniculata. To date, studies have reported many pharmacological effects of Andr, such as anti-inflammatory (Ren et al., 2016), anti-oxidant (Chen et al., 2014), antihyperglycemic , and hepatoprotective properties (Pan et al., 2017). Recent studies have found that Andr inhibits aconitine-induced arrhythmia by inhibiting voltagegated Na + (I Na ), and Ca 2+ (I CaL ) (Zeng et al., 2017). By inhibiting IκB phosphorylation and NF-κB activation, Andr relieves lipopolysaccharide-induced cardiac malfunctions in mice . Andr has also been reported to protect against hypoxia/reoxygenation injury in cardiomyocytes by regulating glutathione levels (Woo et al., 2008). Recently, Hsieh YL reported that Andrographis paniculata extract attenuates pathological cardiac hypertrophy and apoptosis in high-fat dietfed mice . All these findings suggest the cardiac protective effects of Andr. Andr was also reported to inhibit MAPKs in many disease models, including acute lung injury (Peng et al., 2016), rheumatoid arthritis (Li Z.Z. et al., 2017), Alzheimer's disease , and ischemic stroke models (Yen et al., 2016). These studies indicate that Andr may exert anti-hypertrophic effects by regulating MAPKs. Aortic banding (AB) is a reliable model of left ventricular pressure overload for the study of the progression from compensated hypertrophy to heart failure, thus enabling the monitoring of cardiac remodeling (Martin et al., 2012). It is known that pressure overload can activate the renin-angiotensin system and induce the release of angiotensin II (Ang II), which activates the Gα (q) protein-coupled receptor signaling pathway (Wu et al., 2015). Thus, Ang II was used in vitro to induce cardiac hypertrophy in cardiomyocytes. The aim of our study was to explore the effects of Andr on pressure overload-induced cardiac hypertrophy, and fibrosis as well as the underlying mechanisms.
Chemicals
Andrographolide was purchased from Shanghai Winberb Medical S&T Development Co. Ltd. (Shanghai, China) with a purity >98% as determinated by high-performance liquid chromatography analysis.
Animals
Eight to ten weeks old male C57/BL6 mice were purchased from the Institute of Laboratory Animal Science, CAMS&PUMC (Beijing, China). Mice were housed in the Cardiovascular Research Institute of Wuhan University (Wuhan, China) with controlled temperature and humidity. AB surgery was performed as previously described (Wu et al., 2015. After 1 week of AB or sham surgery, the animals were treated with Andr daily (25 mg/kg body weight/day, oral gavage, suspended in 0.5% carboxymethyl cellulose solution) until 8 weeks after surgery. Four groups were included: the vehicle-sham group (veh-sham, n = 15), the Andr-sham group (n = 15), the vehicle-AB group (veh-AB, n = 15), and the Andr-AB group (n = 15). All the experimental procedures were in accordance with the institutional guidelines and approved by the Animal Care and
Echocardiography
Cardiac functions were measured in our laboratory (Wu et al., 2015. Briefly, echocardiography was performed on anesthetized (1.5% isoflurane) mice using a MyLab 30CV ultrasound system (Biosound Esaote, Genoa, Italy) with a 10-MHz linear array ultrasound transducer. Parasternal short axis images were obtained at the level of the mid-papillary muscle in M-mode. Left ventricular (LV) dimensions from five consecutive cardiac cycles were measured and averaged, including LVEDs, LVEDd, end-diastolic LVPWd, and end-systolic LVPW (LVPWs). Fractional shortening (FS) and LV ejection fraction (EF) were calculated using the LVEDs and LVEDd values.
Measurement of Hemodynamic Parameters
Hemodynamic parameters were measured in our laboratory (Wu et al., 2015. Briefly, hemodynamics were measured in anesthetized (1.5% isoflurane) mice using cardiac catheterization. A microtip catheter transducer (SPR-839; Millar Instruments, Houston, TX, United States) was inserted into the right carotid artery and advanced into the LV. Data including HR, enddiastolic pressure (EDP), end-systolic pressure (ESP), dP/dt max, and dP/dt min were analyzed.
Histological Analysis
Heart slides were obtained using previously described methods (Wu et al., 2015. HE staining was performed to assess the cardiomyocyte cross-sectional area (CSA) and to observe the morphology of striated muscle. Sirius red in saturated picric acid (PSR) staining was used to determine interstitial fibrosis. A quantitative digital image analysis system (Image-Pro Plus, version 6.0; Media Cybernetics, Rockville, MD, United States) was used to trace a single myocyte (100-200 myocytes in each group).
Quantitative Real-Time Polymerase Chain Reaction (RT-PCR)
Total RNA and cDNA were prepared previously described (Wu et al., 2015.
We performed 20-µl reactions according to the manufacturer's protocol with the following cycling parameters: 95 • C for 5 min; 45 cycles of 95 • C for 10 s, 60 • C for 10 s, and 72 • C for 10 s; 95 • C for 5 s; 60 • C for 1 min; 97 • C for 0.11 s; and 40 • C for 10 min. The results were analyzed with the 2 − Ct method and normalized to GAPDH gene expression. The primer sequences used in the RT-PCR experiment are listed in Table 1.
Cell Culture
H9c2 cells were prepared according to our laboratory's protocols (Wu et al., 2015). The H9c2 cardiomyocytes were obtained from the Cell Bank of the Chinese Academy of Sciences, in Shanghai, China and cultured in DMEM (C11995; GIBCO, Thermo Fisher, Waltham, MA, United States) supplemented with 10% fetal bovine serum (10099; GIBCO) in an atmosphere containing 5% CO 2 inside a humidified incubator (SANYO 18 M, Osaka, Japan) at 37 • C. The cells were divided into four groups: the control group (CON), the Ang II (1 µM, Sigma) treatment group, the Andr (12.5, 25, or 50 µM) group, and the Ang+Andr (12.5, 25, or 50 µM) group. After treatment for 24 h, cells from six wells were harvested for PCR analysis, while cells from 24 wells were used for immunofluorescence staining.
Isolation and Culture of Cardiac Fibroblasts
Neonatal rat cardiac fibroblasts were prepared to our laboratory's protocols (Wu et al., 2017a). Briefly, neonatal rats that were born within 3 days were sacrificed, and the hearts were collected. The hearts were cut into 1-mm 3 tissue and digested in 0.125% trypsin for 15 min at 34 • C for a total of five times. The digestive fluid was collected and centrifuged. Cells were resuspended, filtered and then seeded onto 100-mm plates for 90 min. After removing the cardiomyocytes, the cardiac fibroblasts were cultured in DMEM/F12 containing 10% FBS at 37 • C in a humidified incubator with 5% CO2. Before treatment with Ang II (1 µM) and Andr (12.5, 25, or 50 µM), the cells were cultured in 1% FBS for 12 h. After treatment for 24 h cells from six wells were harvested for PCR analysis while cells from 24 wells were used for immunofluorescence staining.
Cell Counting Kit-8 Assay
Cell viability was evaluated using the cell counting kit (CCK)-8 assay, according to the manufacturer's instructions. Briefly, 10 µl of CCK-8 solution was added to each well of a 96-well plate, and the absorbance was measured at 450 nm using an ELISA reader (Synergy HT, Bio-tek, Winooski, VT, United States) after a 4-h incubation. The effect of Andr on cell viability was expressed as the percentage of viable cells compared with that in the vehicle group, which was set at 100%.
Immunofluorescence
Culture medium was discarded, and the cells were washed with PBS; then, 4% paraformaldehyde was used to fix the cells for 15 min at room temperature. After rinsing with PBS, the cells were permeabilized with 0.5% Triton X-100 and then rinsed with PBS three times for 5 min each. Primary antibodies against α-actinin (Millipore, 2207266, Darmstadt, Germany) and α-SMA (Abcam, ab7817) at a 1:100 dilution were added to H9c2 cells and cardiac fibroblasts, respectively, in a 24-well plate at 4 • C overnight. The next day, sections were washed with PBS and then incubated with Alexa Fluor R 488 goat anti-mouse IgG (H+L) antibodies for 60 min at 37 • C. Finally, after washing with PBS, Slow Fade Gold antifade reagent with DAPI was used to seal the sections before observation and imaging with a fluorescence microscope. Image-Pro Plus 6.0 was used to analyze the images (n = 5 samples per group and n = 100+ cells were analyzed per group).
Statistical Analysis
All values were presented as the mean ± SEM. SPSS 19.0 for Windows was used for the analysis. One-way ANOVA followed by post hoc Tukey test was performed for the data analysis. Twoway ANOVA followed by post hoc Tukey test was performed to evaluate the time-dependent anti-hypertrophic effects of Andr in vitro. A p-value < 0.05 (two-tailed) was considered statistically significant.
Andr Improves Cardiac Function after Chronic Pressure Overload in Mice
To evaluate the effect of Andr on cardiac function, echocardiography was performed 8 weeks after AB ( Figure 1A). The results showed that wall thickness (LVPWs, LVPWd, Figure 1B), chamber dilation (LVEDd, Figure 1C), and cardiac function (increased LVEF and LVFS, Figure 1D) were in the Andr-AB group than in the veh-AB group. There was no significant difference in HR among groups ( Figure 1E). In addition, hemodynamics showed that Andr treatment improved systolic function and diastolic function after AB surgery (Figures 1F-H). No obvious differences were observed between the sham-vehicle and Andr mice.
Andr Attenuates Cardiac Hypertrophy after Chronic Pressure Overload in Mice
Eight weeks after AB, Andr-treated mice presented decreased heart weight, which was accessed by heart weight to body weight ratios and heart weight to tibia length ratios (Figures 2A,B). Andr-treated mice also exhibited a strikingly decreased heart size and decreased cardiomyocyte CSA compared to vehicle treated animals (Figures 2C,D). Consistently, the expression of hypertrophic genes was significantly decreased in Andr-treated mice after AB ( Figure 2E). Thus, Andr remarkably protected the heart from pressure overload-induced cardiac hypertrophy and dysfunction.
Andr Attenuated Cardiac Fibrosis after Chronic Pressure Overload in Mice
Cardiac fibrosis, one of the main features of cardiac hypertrophy, was evaluated by PSR staining. After 8 weeks of AB, dramatic interstitial fibrosis was observed in all AB mice, but the extent of fibrosis was decreased in the hearts from the Andr-AB group (Figures 3A,B). The expression of myocardial pro-fibrotic genes was also down-regulated by Andr treatment (Figure 3C). Hemodynamics analysis was performed at the end of the study (8 weeks) (n = 8). HR, heart rate; ESP, end-systolic pressure; EDP, end-diastolic pressure; CO, cardiac output; dP/dt max, maximal rate of pressure development; dP/dt min, minimal rate of pressure decay. * P < 0.05 compared with the corresponding sham group. # P < 0.05 vs. the veh-AB group. AB, aortic banding.
Frontiers in Pharmacology | www.frontiersin.org The mRNA expression of CTGF, collagen I, collagen III, fibronectin, and TGFβ1 in the myocardium was analyzed in the indicated groups using reverse transcription-polymerase chain reaction (n = 6). The results are presented as a fold change, and the results are normalized to GAPDH gene expression. * P < 0.05 compared with the corresponding sham group. # P < 0.05 vs. the veh-AB group.
Andr Suppresses Ang II Induced Cardiomyocyte Hypertrophy
To further determine whether Andr could attenuate cardiomyocyte hypertrophy, we treated cardiomyocytes with different concentrations of Andr (0, 12.5, 25, or 50 µM) and stimulated them with Ang II for 24 h. The cell counting kit assay-8 assay revealed that Andr (12.5, 25, or 50 µM) treatment did not affect cardiomyocyte viability ( Figure 4A). Compared with the Ang II group, Andr treatment dramatically blunted the prohypertrophic effect of Ang II in a dose-dependent manner as indicated by the cell surface area, and expression of fetal genes (Figures 4B-D). Thus, 50 µM Andr was used in subsequent experiments.. After stimulating with Ang II for 6, 12, and 24 h, the cardiomyocytes exhibited gradually increased fetal gene expression, but Andr treatment nearly abolished this increase in the hypertrophic response ( Figure 4E). These results indicate an important anti-hypertrophic role for Andr in cardiomyocytes.
Andr Blocks MAPKs Signaling in Vivo and in Vitro
We further gained insight into the molecular events mediating the anti-hypertrophic effect of Andr. Andr has been reported to exert its anti-inflammatory (Ren et al., 2016), anti-oxidant (Chen et al., 2014), antihyperglycemic , and hepatoprotective properties (Pan et al., 2017) by regulating MAPKs signaling. Thus, we detected the protein expression of MAKPs. We found that MAPKs activation, including ERK1/2, JNK, and P38, was induced by pressure overload, but was strongly down-regulated in Andr-treated mice after 8 weeks of AB (Figures 5A,B). In line with this in vivo data, treatment cardiomyocytes with Andr resulted in dephosphorylation of ERK1/2, JNK, and P38 after Ang II stimulation (Figures 5C,D). The results imply that MAPKs may mediate the anti-hypertrophic effects of Andr.
Andr-Mediated Cardioprotection Depends on the Inhibition of MAPKs in Cardiomyocytes
The effect of Andr on MAPKs was further confirmed by MAPKs inhibitors. Cardiomyocytes were treated with ERK1/2 inhibitor (SCH772984, 5 µM, Selleck), JNK inhibitor (SP600125, 10 µM, Sigma), and/or P38 inhibitor (SB209063, 10 µM, Medchem Express) as well as stimulated with Ang II. These inhibitors did not affect cardiomyocyte cell viability as shown in Figure 6A. None of these inhibitors could elicit a antihypertrophic response that was comparable to Andr when applied alone, but treatment with a combination of these three inhibitors did achieve a similar anti-hypertrophic response. Andr treatment could further augment the anti-hypertrophic effect of each inhibitor, as shown by the augmented reduction in surface area and fetal gene expression (Figures 6B-J). These findings suggest that suppression of all members of MAPKs (ERK1/2, JNK, and P38), underlies the anti-hypertrophic effects of Andr on cardiomyocytes.
Andr Reduces Cardiac Fibroblast Activation via MAPKs
Andr treatment ameliorated cardiac fibrosis at 8 weeks after AB in mice (shown in Figure 3). As cardiac fibroblasts play a major role in cardiac fibrosis, we then investigated whether Andr affects fibroblast activation. Cell counting kit-8 assay revealed that Andr (12.5, 25, or 50 µM) treatment did not affect fibroblast viability ( Figure 7A). Ang II-induced fibroblast activation, proliferation, and function were inhibited by Andr treatment in a dosedependent manner (Figures 7A-D). Andr also inhibited Ang II-induced MAPKs activation (including ERK1/2, JNK, and FIGURE 6 | Andr-mediated cardioprotection depends on the inhibition of MAPKs in cardiomyocytes. Cardiomyocytes were treated with ERK1/2 inhibitor (SCH7729, 5 µM), JNK inhibitor (SP600125, 10 µM), and/or P38 inhibitor (SB209063, 10 µM) as well as stimulated with Ang II (1 µM) and treated with Andr (50 µM). (A) Cell viability was accessed by the cell counting kit-8 assay (n = 5). (B-E) Immunofluorescence staining of α-actinin and the cell surface area of cardiomyocytes in the indicated groups (n = 5 samples and n = 100+ cells per group). (F-J) The mRNA levels of ANP and β-MHC in cardiomyocytes in the indicated groups (n = 6). The results are presented as a fold change, and the results are normalized to GAPDH gene expression. * P < 0.05 compared with the control group. # P < 0.05 vs. the Ang II group. (Figures 7E,F), which was confirmed by ERK1/2 inhibitor, JNK inhibitor, and P38 inhibitor treatment (Figures 7H-K). However, Andr did not affect Ang II-induced smad4 expression (Figures 7E,F). Fibroblast viability was neither affected by single MAPKs inhibitor treatment nor treatment with a combination of MAPKs inhibitors ( Figure 7G). These data suggest that the suppression of MAPKs in cardiac fibroblasts contributes to the anti-fibrosis effect of Andr on fibroblasts.
DISCUSSION
Pressure overload induces cardiomyocyte hypertrophy and fibroblast activation in cardiac tissue, resulting in cardiac hypertrophy and fibrosis followed by cardiac dysfunction in both human failure hearts (Gjesdal et al., 2011) and mouse models (Wu et al., 2015). Andr pharmacologically attenuated cardiac hypertrophy and fibrosis in vivo. In addition, our study also demonstrated that Andr ameliorated the Ang II-induced hypertrophic response in myocytes in vitro, as well as alleviated Ang II-induced fibroblast activation, proliferation, and function. Regarding the mechanism, we found that Andr suppressed the activation of ERK1/2, JNK, and P38 in both cardiomyocytes and fibroblasts ameliorating cardiac hypertrophy and fibrosis in the heart and improving cardiac function.
The mechanisms underlying the cardio-protective effects of Andr are not clear. In human patients with heart failure, MAPK signaling proteins have been reported to be hyperactive (Haque and Wang, 2017). Under physiological conditions MAPKs regulate cell proliferation and differentiation, whereas under pathological conditions, activated MAPKs induce the hypertrophic gene transcription (Liu and Molkentin, 2016). The downstream targets of MAPKs kinases include P38 kinase, JNK, and ERK1/2. Studies have shown that constitutive activation of ERK1/2 kinase contributes to concentric hypertrophy in cardiomyocytes (Kehat and Molkentin, 2010); ERK1/2 act upon nuclear factor of activated T-cells to mediate cardiac hypertrophy (Molkentin, 2013). JNK activation results in increased mitochondrion-associated apoptosis and fibrosis in the heart (Rose et al., 2010). Activation of JNK contributes to restrictive cardiomyopathy and promotes fibrosis in the heart (Petrich et al., 2004). P38 is also involved in myocyte growth. P38 inhibitors (SB203580 or SB202190) suppressed hypertrophic stimuli induced myocyte growth and dominant negative P38 delivered by adenoviruses (Zechner et al., 1997;Liang and Molkentin, 2003). Our previous studies have reported that many molecular and plant extracts ameliorate pressure overload-induced cardiac hypertrophy via MAPKs inhibition (Wu et al., 2017b). ATF3 exerts negative feedback on the ERK and JNK pathways to modulate cardiac remodeling (Zhou et al., 2011). Mnk1 prevents cardiac hypertrophy by inhibiting the Ras/ERK pathway . Other plant-derived inhibitors, such as geniposide, indole-3-carbinol, and baicalein, also target on MAPKs signaling proteins (Wu et al., 2017b). These findings do not indicate whether a particular molecule is definitively 'bad' or 'good.' By focusing on network interactions rather than signal molecules, we may have a better chance at influencing the outcome. Previous reports have indicated that Andr exerts various effects by regulating MAPKs. Andr ameliorates rheumatoid arthritis by inhibiting MAPKs pathways (Li Z.Z. et al., 2017). Andr protects against ischemic stroke in rats by regulating the MAPKs signaling cascade (Yen et al., 2016). Andr reduces IL-2 production in T-cells by interfering with NFAT and ERK activation (Burgos et al., 2005;Carretta et al., 2009). Conversely, Andr induces Nrf2 and HO-1 in astrocytes by activating p38 and ERK (Wong et al., 2016). Andr inhibits the growth of human T-cell acute lymphoblastic leukemia Jurkat cells by upregulating of P38 pathways . These results indicate that Andr differentially regulates MAPKs in different cell types. Our results revealed that Andr inhibits the three terminal effectors of the MAPKs signaling cascade following induction by hypertrophic stimuli in both cardiomyocytes and fibroblasts. Using ERK1/2, JNK, and P38 inhibitors, we further demonstrated that Andr exerts cardio-protective effects by inhibiting both ERK, JNK, and P38. Andr exhibits similar inhibition efficiency for ERK, JNK, and P38. As our result in Figure 6 shows, single MAPK protein inhibition exerts similar anti-hypertrophic effect. Andr equally augmented these anti-hypertrophic responses. Direct evidence of the inhibition efficiency of Andr for ERK, JNK, and P38 requires further study.
Cardiac fibrosis is a major feature of hypertrophic cardiomyopathy and contributes to ventricular dysfunction and life-threatening arrhythmia (Travers et al., 2016). Our in vivo study showed that Andr treatment abated the pressure overloadinduced fibrotic response. Cardiac fibroblasts contribute to the heart's response to various forms of injury. After myocardial injury, the expression of various pro-fibrotic factors is up-regulated in fibroblasts, leading to increased fibroblast cell proliferation and ultimately, its transition to the myofibroblast phenotype (Kurose and Mangmool, 2016). Under these conditions, a subset of activated myofibroblasts acquire new phenotypic characteristics, including expression of the contractile protein α-SMA, and contribute to pathological cardiac remodeling (Tomasek et al., 2002). Considering the key role of fibroblasts in cardiac fibrosis, we wondered whether Andr could directly target fibroblasts. Our in vitro results showed that Andr ameliorates Ang II-induced fibroblast activation, proliferation, and function I. The AT1R mediates many effects of Ang II in fibroblasts, including cell proliferation, cell migration, and the induction of extracellular matrix protein synthesis (Gao et al., 2009). AT1R activation results in the G-proteindependent activation of MAPKs, which leads to the activation and expression of collagen in fibroblasts (Dostal et al., 2015). Ang II is also involved in TGFβ/smad signaling. Activation of AT1R by Ang II induces the expression TGF-β1 (Ma et al., 2012). We found that Andr suppressed the activation of MAPKs. MAPK inhibition by a combination of specific inhibitors exerted the same protective effects as Andr. These results indicate that the anti-fibrotic effect of Andr depends the inhibition of MAPKs. In contrast, Andr did not affect smad4 expression, indicating that smad signaling may not be a target of Andr.
In the in vivo study, 25 mg/kg/day Andr was used from 1 week after surgery to 8 weeks after surgery. A pharmacokinetic study reported that the blood concentration-time curve of Andr by oral gavage in rats (10 mg/kg) was fitted to the one-compartment model, in which the blood concentration of Andr increases sharply to 1.6 µg/mL in 100 min and gradually decreases within 600 min (Suo et al., 2007). The 25 mg/kg/day oral dose via gavage that was used in our study could maintain the blood Andr concentration at a certain level. However, accurate pharmacokinetic measurement in mice requires further study.
CONCLUSION
We documented the effective inhibition of the MAPKs signaling pathway by Andr treatment in both cardiomyocytes and fibroblasts and showed that MAPKs mediates the anti-hypertrophic effect of Andrin heart tissue. Cardiac hypertrophy due to stress, such as pressure overload, often culminates in heart failure and is associated with adverse cardiovascular events. Classical pharmacological treatment strategies for heart failure are ineffective in a number of patients. Although the effect of Andr on human cardiac hypertrophy and heart failure has not yet been reported, these observations in mice are critical for the development of treatment strategies for cardiac hypertrophy and heart failure. | 5,937.4 | 2017-11-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
Ensemble-Based Text Classification for Spam Detection
This research proposes an ensemble-based approach for spam detection in digital communication, addressing the escalating challenge posed by unsolicited messages, commonly known as spam. The exponential growth of online platforms has necessitated the development of effective information filtering systems to maintain security and efficiency. The proposed approach involves three main components: feature extraction, classifier selection, and decision fusion. The feature extraction techniques are word embedding, are explored to represent text messages effectively. Multiple classifiers, including RNN including LSTM and GRU are evaluated to identify the best performers for spam detection. By employing the ensemble model combines the strengths of individual classifiers to achieve higher accuracy, precision, and recall. The evaluation of the proposed approach utilizes widely accepted metrics on benchmark datasets, ensuring its generalizability and robustness. The experimental results demonstrate that the ensemble-based approach outperforms individual classifiers, offering an efficient solution for combatting spam messages. Integration of this approach into existing spam filtering systems can contribute to improved online communication, user experience, and enhanced cybersecurity, effectively mitigating the impact of spam in the digital landscape.
Introduction
The pervasive expansion of digital communication platforms has revolutionized global connectivity, enabling seamless information exchange and unprecedented interactivity [1].However, this unprecedented growth has also ushered in a persistent and escalating challenge: the proliferation of unsolicited and often malicious messages, commonly referred to as spam.These intrusive messages not only disrupt efficient communication but also pose substantial risks to the security and integrity of online interactions [2].Consequently, the development of effective spam detection mechanisms has become imperative to sustain the safety, efficiency, and user experience of digital communication channels.In response to the mounting threat of spam, this research introduces an innovative and comprehensive ensemblebased approach to spam detection.This approach addresses the intricate dynamics of spam identification by leveraging the collective power of diverse classifiers within a unified framework [3].In recognition of the exponential growth of online platforms, our research delves into the design and implementation of this ensemble-based approach, which encapsulates three fundamental components: feature extraction, classifier selection, and decision fusion.At the heart of our approach lies the adoption of advanced feature extraction techniques, specifically focusing on word embeddings [4].These techniques harness the semantic nuances of language to transform text messages into dense vector representations, enabling more effective spam detection [5].Concurrently, a spectrum of classifiers is meticulously evaluated, including state-of-the-art Recurrent Neural Networks (RNNs) encompassing Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures.This assessment seeks to identify the optimal combination of classifiers capable of discerning spam messages with unparalleled accuracy.A central tenet of our research revolves around the strategic amalgamation of individual classifier outputs through an ensemble model.This collaborative approach capitalizes on the inherent strengths of diverse classifiers, resulting in heightened accuracy, precision, and recall in spam detection [6].To gauge the efficacy of our proposed ensemble-based method, extensive experimentation is conducted using established metrics and benchmark datasets.The meticulous evaluation process ensures the generalizability and robustness of our approach across various contexts and data distributions.The culmination of our research showcases compelling evidence that the ensemble-based approach significantly surpasses the performance of individual classifiers in combating spam messages.By seamlessly integrating our approach into existing spam filtering systems, the digital landscape stands to benefit from improved communication, enhanced user experiences, and fortified cyber security.This research, spanning two comprehensive pages, embodies a significant stride towards mitigating the pervasive impact of spam in the contemporary digital realm.The contribution of the work is 1.The synthesis of recent literature reinforces the interdisciplinary nature of the proposed ensemble-based approach, harnessing the power of deep learning, ensemble methods, and contextawareness to mitigate the menace of spam in digital communication.
System model
The proposed approach holds significant potential for real-world applications, particularly in the domain of spam detection.In practical scenarios, the impact of this approach lies in its ability to enhance the accuracy and reliability of spam detection systems.By integrating diverse deep learning architectures, including AlexNet, VGG-16, ResNet-50, and an ensemble of Recurrent Neural Networks (Ens_RNN), the model gains the capability to capture both intricate visual features and temporal dependencies within the data.This combination addresses the multifaceted nature of spam, which often manifests in various forms, including image-based spam and evolving text patterns.One key improvement over existing spam detection system is the inherent flexibility of the ensemble approach.The combination of different neural network architectures allows for a more holistic understanding of the diverse characteristics of spam content.This flexibility is particularly beneficial in adapting to new and emerging spam patterns, ensuring the system remains robust against evolving spam techniques.The use of recurrent neural networks also contributes to improved detection accuracy in scenarios where sequential patterns or temporal dependencies play a crucial role, such as in the identification of phishing attempts or evolving spam campaigns.The novelty of our research lies in the thoughtful integration of both convolutional and recurrent neural network architectures within an ensemble framework.While ensemble methods themselves are not novel, the innovation in our approach lies in the effective combination of diverse models, each specialized in capturing specific aspects of spam content.This comprehensive approach enhances the overall performance of the system, demonstrating a nuanced understanding of the intricacies associated with spam detection.Furthermore, the explicit consideration of temporal dependencies through the use of an ensemble of recurrent neural networks represents a novel contribution, as it addresses a critical aspect often overlooked in traditional spam detection systems The proposed ensemble-based spam detection approach follows a straightforward and systematic workflow to effectively identify and block spam messages in digital communication.This approach involves several key stages: First, a diverse dataset containing both spam and legitimate messages is collected and cleaned.Irrelevant characters are removed, and messages are transformed into a format that computers can understand.This prepares the data for analysis.Next, different intelligent algorithms, referred to as "detectives," are selected and trained.These detectives learn from the dataset to recognize patterns that distinguish spam from legitimate messages.The detectives' decisions are then combined through a group decision-making process, similar to teamwork.If most detectives agree that a message is spam, the system is likely to classify it as such.Context and emotional cues are also considered by analyzing the situation, sender, and emotional tone of messages using sentiment analysis.This enhances the system's ability to differentiate between different types of messages.To ensure the system's effectiveness, regular testing and evaluation are performed to see how well the detectives and the group decision are performing.This helps identify areas of improvement and fine-tuning.Once the system proves effective, it can be integrated into email or messaging platforms.Continuous monitoring ensures that it remains up-to-date and adaptive to changing spam patterns.Feedback from users plays a vital role in refining the system.Mistakes made by the system, such as labelling a legitimate message as spam, are learned from and used to make the system smarter over time.The system's impact is assessed by measuring the number of spam messages detected and evaluating its overall accuracy.Findings are documented to share insights and contribute to the improvement of email and messaging systems.In essence, the ensemble-based spam detection approach combines data processing, intelligent analysis, teamwork among algorithms, context understanding, user feedback, and continuous improvement to create a robust and reliable defence against spam messages in digital communication.
A. Preprocessing
The initial phase of the project involves the collection and preparation of data, a critical step to ensure the effectiveness of the proposed ensemble-based spam detection approach.A diverse dataset encompassing both spam and legitimate text messages is carefully curated.These messages are manually labelled as either "spam" or "legitimate" to establish a reliable ground truth for model training and evaluation.The collected dataset undergoes a meticulous cleaning process, where noise, special characters, and irrelevant details are meticulously removed.To ensure consistent analysis, all text is converted to lowercase, and common words devoid of substantial meaning (stopwords) are excluded.Tokenization dissects the text into meaningful units, which can be words or even smaller subword components.A significant transformation occurs through word embeddings is Word2Vec, which convert words into numerical vectors that encapsulate their semantic essence.Finally, the dataset is split into distinct subsets: the training set serves as the educational foundation for the model, the validation set assists in parameter tuning, and the test set provides a final assessment of the model's capabilities.This comprehensive data collection and preprocessing phase lays a robust groundwork for subsequent stages, contributing to the overall accuracy and efficiency of the ensemble-based spam detection approach.
B. Tsallis entropy-based segmentation
Tsallis Entropy-based segmentation for text classification is a novel way to improve accuracy and resilience.A core notion for text data segmentation is Tsallis Entropy, an expanded version of entropy.This method uses the text's information dynamics and inconsistencies to better grasp its patterns.It divides text into meaningful parts that may represent distinct categories or themes.This methodological fusion may enhance text categorization by addressing the complexity and diversity of textual information.The combination of Tsallis Entropy-based segmentation with text categorization requires multiple phases.To maintain consistency, text data is preprocessed using tokenization, stopword removal, and stemming [12].It is then calculated for each section to show text linguistic characteristics.In text categorization, Tsallis Entropy helps identify linguistic patterns linked with various classes.Higher Tsallis Entropy values in some portions may suggest complexity or divergence, indicating unique content.This information helps classification algorithms choose a text segment category or label.
It may improve sentiment analysis, topic modelling, and content categorization accuracy and interpretability.The fundamental properties of Tsallis Entropy complement standard text categorization, enabling more nuanced and effective textual data processing.However, Shannon changed the definition of entropy to assess uncertainty based on the system's data content.Furthermore, it is ensured that the additive quality of the Shannon entropy as calculated by Using a general entropy construction and the numerous fractal notions, the Tsallis entropy is expanded to nonextensive module: where indicates the degree of non-extensiveness of the Tsallis variable, or entropic index, technique, and defines the quantity of likelihood of occurrence of the scheme.An entropic pseudo-additive rule converts the entropic scheme into an independent and identically distributed module: The Tsallis entropy may be carefully considered while determining the ideal threshold for a picture.Consider a grayscale picture with L levels in the range of a probability distribution.So, it is possible to achieve the Tsallis multilevel thresholding by The appropriate threshold for a picture might be selected by carefully taking into account the Tsallis entropy.
Consider that the likelihood distribution for a picture with L grey levels in the interval of {0, 1, . . ., L − 1} values with p i = p 0 , p 1 , … p L−1 .so, it is possible to achieve the Tsallis multilevel thresholding by
C. Non-linear data augmentation
Non-linear data augmentation is a sophisticated technique applied to enhance the performance and generalization ability of text categorization models.It involves creating new instances of text data by applying various non-linear transformations that preserve the inherent semantics and meaning of the original text [13].This approach aims to diversify the training data, making the model more robust and capable of handling variations in language usage and expression.
D. Ensemble feature extraction
Ensemble feature extraction utilizing Word2Vec embeds a sophisticated approach that amalgamates the strengths of ensemble methodologies with the semantic comprehension offered by Word2Vec's word embeddings.This amalgamation is designed to elevate the representation of textual data across a spectrum of natural language processing endeavors.The foundation of this process lies in Word2Vec's adeptness at transmuting words into dense, contextually informed vectors that encapsulate semantic relationships.The process unfolds as follows: Initially, the Word2Vec embeddings are derived through a pre-trained model, furnishing each word within the textual corpus with a high-dimensional vector reflective of its semantic essence.The innovation comes to fruition through an ensemble of diverse feature extraction methodologies applied to these embeddings.This ensemble encapsulates an array of extraction methods, encompassing techniques like averaging, weighted averaging, and stacking, among others.The outcome of this ensemble process is a tapestry of feature representations for each text fragment, each facet gleaned through a distinct extraction mechanism.During the classifier training phase, these manifold features serve as input.The classifiers are primed to address a spectrum of natural language processing objectives, be it sentiment analysis, text classification, or even named entity recognition.In the realm of prediction, the outputs of these classifiers conjoin through ensemble methodologies, materializing as either majority voting, weighted voting, or stacking.This aggregate decision-making draws upon the comprehensive viewpoints captured by the ensemble feature extraction process.The potency of ensemble feature extraction via Word2Vec burgeons from its ability to synergize the intricate semantic subtleties encapsulated by Word2Vec embeddings with the manifold vantage points fostered by ensemble strategies.This not only augments representation but also fortifies resilience, potentially culminating in heightened model performance and broader applicability.As with any advanced approach, considerations encompass computational demands and the imperative of meticulous hyperparameter calibration to unlock the full potential of this innovative amalgamation.The selection of classifiers and feature extraction techniques in this study was guided by a thoughtful consideration of their efficacy in addressing the complexities of the medical imaging datasets under investigation.AlexNet, VGG-16, and ResNet-50, renowned for their success in image classification tasks, were chosen for their ability to capture intricate features in medical images.Their deep and hierarchical architectures allow for the automatic extraction of relevant features without the need for manual engineering.Additionally, an ensemble of Recurrent Neural Networks (Ens_RNN) was introduced to capture temporal dependencies within the data, an essential consideration in medical time series.The ensemble approach was deemed appropriate to enhance model robustness, leveraging the diversity of the individual models.Regarding ensemble methods, a straightforward averaging approach was chosen for its simplicity and effectiveness in maintaining model diversity.While alternative strategies such as bagging and boosting were considered, the diverse nature of the chosen base models rendered more complex ensemble methods unnecessary.The decision-making process was guided by a desire for a transparent and interpretable methodology.To assess the performance of the models, a comprehensive set of metrics, including accuracy, precision, recall, specificity, false positive rate (FPR), and false negative rate (FNR), was employed.This choice was motivated by the nuanced nature of medical data, where different types of classification errors can have varying consequences.By articulating these methodological choices, this paper aim to provide clarity and transparency in our approach, facilitating a deeper understanding and reproducibility of the results.
E. Classification using ensemble RNN:
We suggest an ensemble approach that combines the LSTM, Bi-LSTM, and GRU deep learning architectures.LSTM-GRU classifier: This network solves the vanishing gradient issue by adding a second processor, known as a cell, that can judge whether the data is useful or not.Three gates-the input gate f t , the forgetting gate f t , and the output gate o t -are arranged in a cell.The cell functionality are defined as follows:
Performance analyses
In the context of ensemble-based text classification for spam detection is compared with SVM [14], RF [15], NB [16] with several performance metrics can be utilized to evaluate the effectiveness of the approach.These metrics provide insights into the model's accuracy, precision, recall, and its ability to handle different aspects of the classification task.
• Accuracy: The proportion of correctly classified messages out of the total messages in the dataset.It provides an overall measure of the model's correctness.
• Precision: The proportion of true positive predictions (correctly identified spam) out of all positive predictions (both true positives and false positives).Precision is particularly relevant when the cost of false positives is high.• Recall (Sensitivity): The proportion of true positive predictions out of all actual positive instances.
Recall is valuable when the cost of false negatives (missed spam) is a concern.• Specificity: The harmonic mean of precision and recall, providing a balanced measure of a model's performance.
A. Dataset description
The SpamDetectionDataset was collected from various online platforms, including social media, emails, and online forums.The dataset was curated to include a diverse range of text messages, encompassing both legitimate content and unsolicited messages commonly known as "spam."The dataset was compiled for the purpose of developing and evaluating an ensemble-based text classification approach for spam detection.
Conclusions
This research has introduced and demonstrated the efficacy of an ensemble-based approach for tackling the persistent and escalating challenge of spam detection in digital communication.As the online landscape continues to expand, the need for effective information filtering systems to safeguard security and optimize efficiency becomes increasingly critical.By focusing on three key components -feature extraction, classifier selection, and decision fusion -this approach has showcased a comprehensive and innovative strategy.Leveraging word embedding techniques, text messages are adeptly represented, forming the foundation for subsequent analysis.The meticulous evaluation of multiple classifiers, including advanced RNN models like LSTM and GRU, has enabled the identification of optimal performers.The culmination of these classifiers into an ensemble model capitalizes on their strengths, resulting in elevated accuracy, precision, and recall for spam detection.Through extensive experimentation and benchmarking on widely accepted datasets, the approach's robustness and applicability have been established.The ensemble-based technique consistently outperforms individual classifiers, offering a pragmatic solution to the challenge of spam messages.By seamlessly integrating this approach into existing spam filtering systems, a ripple effect of positive outcomes is anticipated.Enhanced online communication quality, improved user experiences, and heightened cyber security are all foreseeable benefits.As a collective result, the digital landscape stands to be significantly fortified against the intrusive and disruptive impact of spam.In a world where digital communication is central, the demonstrated effectiveness of this ensemble-based approach signifies a promising step towards safer, more efficient, and user-centric online interactions.Future work in this domain may further refine and extend the approach, continuing to bolster the fight against the everevolving threat of spam.
. The work flow of the classification of text classification is shown in Fig 1.
Figure 1 :
Figure 1: work flow text classification
Table 1 :
Literature contributions to spam detection and classification (Devlin et al., 2019) the field of text classification, there have been several related works that focus on improving accuracy and performance.Some notable studies include: The literature survey encapsulates the burgeoning advancements in spam detection, text classification, and ensemble methods, spanning the last five years.Recent research has illuminated the potential of deep learning models, ensemble techniques, and innovative feature extraction methods, shaping the groundwork for the proposed ensemble-based approach for spam detection.The transformative impact of deep learning in text classification is evident through breakthrough models like BERT(Devlin et al., 2019)and the diverse architectures explored by Chen et al. (2020).These studies accentuate the significance of contextual understanding and feature extraction, pivotal for the success of our ensemble approach.Focusing on email spammers, this study introduces graph embedding for detection, aligning with the proposed approach's decision fusion and context-awareness (L.Shi et al., 2021).This paper demonstrates a deep learning approach for detecting spam on Twitter, offering insights into social media-specific spam characteristics.The exploration of diverse platforms enriches the proposed approach's scope (F.M. Couto et al., 2019).While focused on cyberbullying, this study highlights sentiment analysis's role in detection, correlating with the ensemble-based decision fusion strategy's sentimentbased analysis (M.M. Zulfikar et al., 2020).The detection of malicious URLs (Gupta & Soni, 2020) aligns conceptually with spam detection, reinforcing the importance of algorithm selection and evaluation.Additionally, Maatuk and Abbass (2020) highlight the contextual nuances of spam detection in online social networks, mirroring the decision fusion component's emphasis on context-aware analysis.These related works contribute to the advancement of text classification by exploring various deep learning architectures, transfer learning, ensemble techniques, and other machine learning algorithms.They provide valuable insights and benchmark results, inspiring further research in this critical domain.
)
Here, is sigmoid non-linear function, is the tangent non-linear function. , , , and , . , are learnable weights.⊙ refers element-wise multiplication. and −1 denotes the cell state at and -1, ht and ℎ −1 denotes the hidden-state at time -1, and means the th time step.N.
Table 4 :
Comparison of precision
Table 9 :
Overall comparative analysis Across varying dataset sizes, Ens_RNN consistently outperforms its counterparts, achieving remarkable accuracy, precision, recall, specificity, and maintaining low false positive and false negative rates.The accuracy comparison (Table 3, Fig 2) reveals Ens_RNN's exceptional performance, starting with a high accuracy of 97% for 2000 samples and steadily improving to an impressive 98.6% for 10,000 samples.Precision values (Table 4, Fig 3) showcase Ens_RNN's dominance, reaching an extraordinary 99.7% for 10,000 samples, while SVM, RF, and NB maintain relatively stable precision levels.Ens_RNN's recall rates (Table 5, Fig 4) consistently outshine other methods, emphasizing its strong ability to identify and classify spam messages effectively.Specificity values (Table 6, Fig 5) further highlight Ens_RNN's reliability in accurately classifying legitimate messages, starting with an impressive 98.8% for 2000 samples and maintaining this elevated performance.The comparison of false positive rates (FPR) (Table 7, Fig 6) underscores Ens_RNN's capability to reduce false positives, contrasting with an increasing trend in FPR for other methods.Additionally, the analysis of false negative rates (FNR) (Table 8, Fig 7) accentuates Ens_RNN's consistency in minimizing misclassifications of actual spam messages.In summary, Ens_RNN emerges as a robust and effective solution for spam detection, consistently outperforming traditional methods across multiple performance metrics, thereby affirming its potential in enhancing the reliability and efficiency of spam detection in diverse digital communication channels. | 4,862.2 | 2024-04-08T00:00:00.000 | [
"Computer Science"
] |
Recent trends of groundwater temperatures in Austria
Climate change is one of if not the most pressing challenge modern society faces. Increasing temperatures are observed all over the planet and the impact of climate change on the hydrogeological cycle has long been shown. However, so far we have insufficient knowledge on the influence of atmospheric warming on shallow groundwater temperatures. While some studies analyse the implication climate change has for selected wells, large-scale studies are so far lacking. Here we focus on the combined impact of climate change in the atmosphere and local hydrogeological conditions on groundwater temperatures in 227 wells in Austria, which have in part been observed since 1964. A linear analysis finds a temperature change of +0.7± 0.8 K in the years from 1994 to 2013. In the same timeframe surface air temperatures in Austria increased by 0.5± 0.3 K, displaying a much smaller variety. However, most of the extreme changes in groundwater temperatures can be linked to local hydrogeological conditions. Correlation between groundwater temperatures and nearby surface air temperatures was additionally analysed. They vary greatly, with correlation coefficients of −0.3 in central Linz to 0.8 outside of Graz. In contrast, the correlation of nationwide groundwater temperatures and surface air temperatures is high, with a correlation coefficient of 0.83. All of these findings indicate that while atmospheric climate change can be observed in nationwide groundwater temperatures, individual wells are often primarily dominated by local hydrogeological conditions. In addition to the linear temperature trend, a step-wise model was also applied that identifies climate regime shifts, which were observed globally in the late 70s, 80s, and 90s. Hinting again at the influence of local conditions, at most 22 % of all wells show these climate regime shifts. However, we were able to identify an additional shift in 2007, which was observed by 37 % of all wells. Overall, the step-wise representation provides a slightly more accurate picture of observed temperatures than the linear trend.
Abstract. Climate change is one of if not the most pressing challenge modern society faces. Increasing temperatures are observed all over the planet and the impact of climate change on the hydrogeological cycle has long been shown. However, so far we have insufficient knowledge on the influence of atmospheric warming on shallow groundwater temperatures. While some studies analyse the implication climate change has for selected wells, large-scale studies are so far lacking. Here we focus on the combined impact of climate change in the atmosphere and local hydrogeological conditions on groundwater temperatures in 227 wells in Austria, which have in part been observed since 1964. A linear analysis finds a temperature change of +0.7 ± 0.8 K in the years from 1994 to 2013. In the same timeframe surface air temperatures in Austria increased by 0.5 ± 0.3 K, displaying a much smaller variety. However, most of the extreme changes in groundwater temperatures can be linked to local hydrogeological conditions. Correlation between groundwater temperatures and nearby surface air temperatures was additionally analysed. They vary greatly, with correlation coefficients of −0.3 in central Linz to 0.8 outside of Graz. In contrast, the correlation of nationwide groundwater temperatures and surface air temperatures is high, with a correlation coefficient of 0.83. All of these findings indicate that while atmospheric climate change can be observed in nationwide groundwater temperatures, individual wells are often primarily dominated by local hydrogeological conditions. In addition to the linear temperature trend, a step-wise model was also applied that identifies climate regime shifts, which were observed globally in the late 70s, 80s, and 90s. Hinting again at the influence of local conditions, at most 22 % of all wells show these climate regime shifts. However, we were able to identify an additional shift in 2007, which was observed by 37 % of all wells. Overall, the step-wise representation provides a slightly more accurate picture of observed temperatures than the linear trend.
Introduction
The thermal regime in the ground is coupled with the conditions in the atmosphere, and air temperature variations leave their traces in the ground. While already at a depth of a few metres, the amplitudes of periodic diurnal and seasonal temperature trends are strongly attenuated (Taylor and Stefan, 2009), long-term non-periodic changes of air temperature permanently influence the subsurface down to greater depths of several tens to hundreds of metres (Beltrami et al., 2005). Worldwide, borehole temperature profiles therefore show an increase in surface air temperature (SAT) due to recent climate (Huang et al., 2000;Harris and Chapman, 1997). In borehole climatology, the focus is set on "dry" boreholes in undisturbed natural areas, that is, boreholes with negligible influence of groundwater flow and no direct human impacts. Borehole temperatures logged in such boreholes can be used to invert vertical conductive heat transport models for deriving the corresponding trend of ground surface temperature (GST). By assuming that GST and SAT are directly coupled or similar, past climate can be reconstructed. Many boreholes, however, are located in urbanized areas and regions with past changes in land cover, where often accelerated ground heat flux and higher GST are observed (Bense and Beltrami, 2007;Menberg et al., 2013;Bayer et al., 2016;Cermak et al., 2017). Moreover, in humid climate regions boreholes are mostly not dry, but drilled for groundwater use or monitoring. When dynamic groundwater flow conditions ex-3144 S. A. Benz et al.: Recent trends of groundwater temperatures in Austria ist, then advective heat transport can substantially affect the thermal regime in the subsurface (Ferguson et al., 2006;Kollet et al., 2009;Taylor and Stefan, 2009;Stauffer et al., 2017;Westaway and Younger, 2016;Uchida et al., 2003). Additionally, recharge processes, including snowmelt and rainderived recharge, might impact the thermal regime of the shallow subsurface. Previous studies, however, indicate that in many cases their influence can be neglected. Ferguson and Woodbury (2005) and Bense and Kurylyk (2017) demonstrated that it is possible to estimate groundwater recharge by using temperature-depth profiles based on the common assumption that the mean annual groundwater recharge temperature is equal to the mean annual surface air temperature. Menberg et al. (2014) showed in their study that the contribution of snowmelt-induced recharge with low temperature is minor in comparison to the overall recharge. Finally, Molina-Giraldo et al. (2011) investigated the impact of seasonal temperature signals on an aquifer upon bank infiltration, also including varying groundwater recharge temperatures. They showed that the convective heat transfer by groundwater recharge compared to conduction through the unsaturated zone and convection within the aquifer is of minor impact. Still, the interplay of long-term climate variations, land use change and groundwater produces a complex transient system, which is difficult if not impossible to accurately understand based on a few borehole measurements (Irvine et al., 2016;Kupfersberger et al., 2017;Kurylyk et al., 2017Kurylyk et al., , 2014Kurylyk et al., , 2013Taniguchi and Uemura, 2005;Taniguchi et al., 1999;Zhu et al., 2015).
The consequence of climate change for aquifers was illuminated with respect to groundwater recharge and availability of freshwater resources (Moeck et al., 2016;Scibek and Allen, 2006;Holman, 2006;Gunawardhana and Kazama, 2011;Loáiciga, 2003), groundwater quality impacts (Kolb et al., 2017) and effects on groundwater(-dependent) ecosystems (Burns et al., 2017;Jyväsjärvi et al., 2015;Kløve et al., 2014;Andrushchyshyn et al., 2009;Hunt et al., 2013). Taylor et al. (2012) summarized various connections and feedbacks between climate change and groundwater. A key parameter is the temperature, which is expected to increase in shallow groundwater globally following with some delay roughly the trends in the atmosphere. However, long-term measurements of temperature evolution in groundwater are rare (Watts et al., 2015;Figura et al., 2015). Instead, often well measurements taken at a few different time points are compared to indicate elevated temperatures, such as by Gunawardhana and Kazama (2011) for the Sendai Plain in Japan, by Šafanda et al. (2007) for boreholes in the Czech Republic, Slovenia, and Portugal, and Yamano et al. (2009) and Menberg et al. (2013) for urban areas in eastern Asia and central Europe. Others, such as Kupfersberger (2009) and Menberg et al. (2014), examine repeated temperature records of single or a few selected wells. The work by Lee et al. (2014) is one of the very few studies on long-term groundwater temperature (GWT) time series recorded for a larger area. They applied linear regression to hourly temperature data recorded from 2000 to 2010 at 78 South Korean national groundwater monitoring sites. They found a mean increase of 0.1006 K year −1 and concluded that shallow ground and surface temperature show moderate proportionality. Lee et al. (2014), however, reported that 12 wells revealed decreasing GWT trends without further details on potential factors. Blaschke et al. (2011) applied trend analyses to long-term data sets of mean annual GWT of 112 and 255 wells for the time periods 1955-2006 and 1976-2006 respectively in Austria. They found increasing trends of the GWT in shallow porous aquifers related to increasing air temperature. Similar insights from other regions are still lacking, and the contribution of atmospheric warming to long-term GWT evolution is nearly unexplored.
In the presented study, GWTs of 227 wells in Austria, measured in part since 1966, are analysed and regional patterns and temperature anomalies are identified. In contrast to Blaschke et al. (2011), the focus here is not only set on linear trends, but also on detection of climate regime shifts in the measured GWT, following the suggestions by Figura et al. (2011) and Menberg et al. (2014). As a relevant mode of global climate variability, long-lived decadal patterns such as the Atlantic or Pacific decadal oscillation have been identified, e.g. Minobe (1997) and Rodionov (2004). These control atmospheric temperatures as well and are often described as sudden, step-wise temperature changes separating stable periods, called climate regimes. Even if these regime shifts arrive attenuated and delayed in shallow groundwater, they can be detected and thus can offer another hint at the influence of climatic variations. Aside from the statistical analysis of GWT time series, the influence of land cover as well as their correlation with surface air temperature are investigated to scrutinize potential local influences on the measured data. The Austrian Alps as the main part of the European Eastern Alps are characterized by a complex geology with various lithologies and were built up during multiple tectonic phases striking now in a west-east direction. The complexity of the tectonic and geologic settings of the European Alps and in particular of the European Eastern Alps is described and discussed by numerous authors (e.g. Schmid et al., 2004;Linzer et al., 2002). Active tectonic evolution resulting in high topography and uplift rates coincided largely with high stream power (Robl et al., 2017(Robl et al., , 2008 and thus had an impact on the drainage system of the Alps. During the Pleistocene the Alps were affected by glaciations with a strong impact on the morphology, in particular on the inner Alpine valleys and the foreland. Due to sedimentation during the Holocene these areas now contain Quaternary porous aquifers. The herein analysed wells are located in shallow aquifers representing these Quaternary sediments in the inner Alpine valleys and foreland basin. Based on a compiled geology a hydrogeological overview as a hydrogeological map of Austria is provided by Schubert et al. (2003).
Climate and climate trends during the last 2 centuries (1800-2000) of the Great Alpine Region (European Alps and their surrounding foreland, GAR) were intensively investigated during the last few decades, yielding the HISTALP data set (Auer et al., 2007). This data set left its mark on the regional classification of climate zones by Köppen-Geiger, where Austria is mainly divided into three climate zones, warm temperate, boreal, and Alpine.
Groundwater temperatures
In Austria, GWTs up to December 2013 are provided by the Austrian Federal Ministry of Sustainability and Tourism Directorate-General IV. -Water Management (BMNT, 2016), the former Federal Ministry of Agriculture, Forestry, Environment and Water Management (BMLFUW) in 1138 wells. Here, we focus on all wells with a measurement depth of less than 30 m, a record of at least 20 years and no major breaks (> 3 months) in the last 20 years of the time series. Hence, all studied wells have been monitored since at least January 1994, and some already since 1966 (see Fig. S1a in the Supplement for more information). Additionally, wells impacted by geothermal hot springs were excluded. Overall, in this study annual mean data of 227 individual wells from all over the country (Fig. 1a) are analysed. Years with less than 9 months of data are excluded. For the timeframe 1994-2013, this amounts to 74 excluded data points in 60 wells. Additionally, only 9-11 months of data were available for 260 data points in 122 wells. To minimize the associated bias, these small gaps in the time series were filled using a linear fit. Hence small errors for years without a full set of monthly mean data have to be expected.
The average measurement depth in the wells is 7 ± 4 m below ground surface (Fig. S1b). All wells are located in the Cfb climate zone of the Köppen-Geiger classification, warm temperate climate with warm summers and no dry seasons (Rubel et al., 2017). The spatial median GWTs and inner 90 percentiles for all wells are displayed in Fig. 1b. The obtained temperature in Fig. 1b increases from around 9.8 • C in 1966 to 11.4 • C in 2013.
Following the CORINE Land Cover (CLC) data from 2012 ( Fig. S2a), 45 % of all wells are under artificial surfaces, 46 % under agricultural areas, and 9 % under forest following the 100 m × 100 m classification. In addition, CLC from 1990 was consulted; however, no land cover changes near any of the analysed wells are observed. Overall, for the time period 1994-2013, absolute GWTs of the monitored wells under artificial surfaces are on average 1.5 ± 0.3 K warmer than GWTs under forest; GWTs under agricultural areas are on average 0.6 ± 0.2 K warmer than GWTs under forest (Fig. S2b). This validates previous findings by Benz et al. (2017b) for GWTs in Germany, who identified even larger differences of up to 3 K between the individual land cover classes.
Surface air temperatures
Surface air temperatures (SATs) within Austria are monitored by the Central Institution for Meteorology and Geodynamics (ZAMG), Austria. In this study data from 12 individual weather stations are being analysed, each one located within 5 km of at least one analysed well and in the same climate zone (Cfb). Their location is displayed in Fig. 1a. Again, annual mean data were available for a time period of 1966-2013 (Fig. 1b). As expected and as previously shown in Benz et al. (2017b) for SAT and Benz et al. (2017a) for land surface temperatures, above-ground temperatures are generally lower than GWTs. All 12 analysed weather stations are located in areas classified as artificial surface and experienced no land cover changes. Within this study, the Spearman correlation coefficient was determined, as it is especially robust to outliers caused for example by heat waves, which impact air temperatures but have only minor effects on groundwater temperatures. When determining the correlation between two time series, missing years were ignored. Next to the correlation between GWT and SAT, correlations between all individual wells and weather stations were determined in order to create a plot similar to a (semi)variogram that shows the correlation between two measurement stations, depending on their distance to each other.
Linear analysis
Equivalent to the work by Lee et al. (2014), a linear temperature change was determined for all 227 wells. For this, a linear regression model of the annual mean temperature data was determined in Matlab (Version 2016b). Because all wells in our data set were continuously monitored between 1994 and 2013, only this timeframe was analysed.
Climate regime shifts
Climate data are often thought not to change linearly, but in the form of a step function, dividing a time series into individual climate regimes of a constant mean (Andrushchyshyn et al., 2009;Minobe, 1997). These regimes are changed when so-called climate regime shifts (CRS) occur and long-term mean values change. While several methods to model these shifts have been in use (Easterling and Peterson, 1995), in recent years the method by Rodionov (2004) has become standard. It identifies the significance of each possible shift by calculating the so-called Regime Shift Index (RSI): the cumulative sum of the normalized differences between the observed values and the long-term mean of the assumed regime. Only shifts with a positive RSI are considered significant, and a higher value of RSI denotes more pronounced CRS. The entire algorithm is described in detail by Rodionov (2004). This sequential analysis is data driven and requires no prior knowledge of the timing of possible shifts. It was updated to further include prewithening in order to reduce background noise (Rodionov, 2006) and is available online as a Microsoft Excel add-in (NOAA, 2017). In this study we applied the method to the complete time series of all 227 wells and 12 weather stations. Because the algorithm cannot handle gaps within the analysed series, gaps in our data were filled using a linear fit. All parameters were set to the same values as in the work by Menberg et al. (2014), who applied the method to four GWT time series in Germany. A target significance level of 0.15 was used by Menberg et al. (2014), and in our analysis, the cut-off length was set to 10 years and the Huber weight parameter was set to 1. No. of pairs of wells Figure 2. Influence of distance on the correlation between the annual means of two measurement points. (a) The correlation between SAT time series is given in red, and the median correlation between GWT time series is given in blue. The inner 90 percentile is coloured in grey, and the number of pairs of wells per distance is shown in dark blue below. (b) The colour gives the median correlation between GWTs of two wells in relation to their absolute distance to each other in the east-west direction (x axis) and in the north-south direction (y axis). Figure 2a displays the correlation between different wells or rather different weather stations in relation to their distance to each other. Shown is the distance between two wells/weather stations on the x axis and the corresponding Spearman correlation coefficient between them. For the weather station, each individual pair is shown by a red point; for GWTs, as there are many possible pairs of wells, the line gives the moving median (±25 km) correlation of all pairs at the corresponding distances. The inner 90th percentiles are shown in grey, and correlation coefficients close to or below zero are determined for several pairs of wells. However, here p values are generally also close to 1 and GWTs do not correlate. This is most likely due to local heat sources impacting at least one well in these pairs.
Correlations
As expected the moving average correlation decreases with distance. This decrease is more extreme in GWTs than in SATs and GWTs correlate less than SATs overall. This agrees with the observations in Benz et al. (2017b), who showed that annual mean GWTs show greater variations than SAT over the same distances.
Additionally, the correlation between two wells seems to be anisotropic: correlation coefficients between two wells decrease faster with north-south distance than with west-east distance (Fig. 2b), which can be explained by the dominant striking direction of the geology and the resulting topography in Austria, where valleys generally run from west to east.
Hence, larger rivers typically follow this direction and wells at the same latitude experience similar temperature signals.
In a next step, correlations between GWT and SAT are determined. On a country-wide scale the Spearman correlation coefficient between spatial median GWT and SAT (Fig. 1b) is 0.83. In comparison, the correlations between individual weather stations and wells are shown in Fig. 3; locations are displayed in detail in Fig. S3. Here correlations vary greatly and Spearman correlation coefficients are < 0.5 for about half of all wells within 5 km of a weather station. This indicates that GWTs are often influenced by local causes and not necessarily solely by local SATs. The lowest correlation is determined in Linz, where the groundwater is intensively used for cooling and heating (Krakow and Fuchs-Hanusch, 2016). The studied well is located within the city centre next to train tracks and office buildings. Hence, it is very likely that the thermal properties of the groundwater are dominated by anthropogenic influences from heated buildings and underground structures, as is often the case in subsurface urban heat islands (Menberg et al., 2013;Benz et al., 2015Benz et al., , 2016Attard et al., 2016). This would also explain the high GWTs that are on average 3.3 K warmer than the local annual mean SAT. Like the well, the weather station is also located within the city centre. The best correlations between individual pairs of a well and a weather station can be observed in the southern part of the city of Graz, where all wells and the weather station are located close to or within Graz airport, respectively. The well with the highest correlation of 0.80 with SAT is located less than 1 km from the weather station close to the airport parking lot next to suburban housing. It has been continuously monitored since 1970 and is the longest time series in the area. The well with the lowest correlation (0.45) with the weather station here is located slightly to the east near a dog park and suburban housing. Here observations started in 1994: it is the shortest time series in this area. At all other wells, measurements began in 1986 and show correlations between 0.6 and 0.7 with SAT, indicating that the duration of the measurements plays a significant role for local comparisons. In contrast, duration of the time series appears to be of minor importance on a country-wide scale. For example, the long time series in Wiener Neustadt (Fig. 3), which started measurements in 1970 and is located near a mineral extraction site, has a correlation of 0.48 and is therefore comparable to the short time series in Graz, starting in 1994 and located in a suburban area.
Additionally, measurement depths of GWT can have an impact on the correlation between SAT and GWT. While it is generally assumed that a measurement depth closer to the surface results in a better correlation with SAT as there is less of a shift between both data sets, this is only the case for some of the locations analysed here, such as Villach (Fig. S4a). In contrast, correlation increases with GWT measurement depth for other locations, such as the one in Graz. This might be related to local underground heat sources such as sewage systems impacting GWT near the surface more than temperatures at greater depth. However, as the depth of the wells analysed here varies only slightly, no definite conclusions can be drawn without further inspection of specific cases. Table 1 displays the correlations between spatial median GWT and spatial median SAT for each of the SAT locations in Figs. 3 and S3. For all locations with at least two wells besides Zeltweg and Graz correlation does improve when spatial median GWT is analysed instead of the individual locations. In all likelihood the spatial median GWT provides a more general temperature trend that is not influenced by local influences on temperatures such as construction work, plant development and shading, and is therefore more closely related to surface air temperatures.
In addition, the data indicate that city size, or rather population of a city, do not necessarily influence the correlation between GWT and SAT (Table 1). For example, both locations, Graz (population of more than 250 000) and Eisenstadt (population of 13 000), have similar correlation coefficients despite their different populations. Meanwhile, Bregenz and Feldkirch have a similar population (∼ 30 000) and number of wells (six), but different correlation coefficients (0.52 and 0.19). However, it is also important to note that not all wells analysed here are located in the city centre; still, all of them are in close proximity (< 250 m) to anthropogenically used areas (Fig. S3).
Linear temperature change
During the time between 1994 and 2013, GWTs changed on average by +0.36 ± 0.44 K per 10 years and SAT on average by +0.24 ± 0.13 K per 10 years. The lower changes in SAT are most likely due to the chosen timeframe: a heat wave in summer 1994 led to extraordinarily high annual mean SAT in this year (Fig. 1b) and thus impacted the determined linear temperature change. The increase in GWT is in good agreement with results of a former study considering data sets of Austria from 1976 to 2006 (Blaschke et al., 2011). However, it is more than double the global air temperature increase determined by Jones et al. (1999) for the timeframe 1978 to 1997, with +0.32 K in 20 years and less than the numbers given in the work by Ji et al. (2014). In their global study they give an air temperature increase of more than 0.4 K for the timeframe 2000 to 2009 for the northern mid-latitudes including Austria. Figure 4 displays a map and a histogram of all determined GWT changes. There appears to be no significant influence of land cover on the observed temperature change (Fig. S2c). Median temperature change is approximately 0.4 ± 0.4 K per 10 years for groundwater under artificial surfaces and forest areas, and 0.3 ± 0.5 K per 10 years under cultivated areas. However, temperature change decreases slightly with GWT measurement depth by approximately 0.015 K per 10 years per metre (Fig. S4b). This relationship can be related to deeper temperatures corresponding to earlier temperatures, when temperature increase was less severe. However, because the vast majority of temperatures are monitored at a depth of less than 15 m and show a high variability in linear temperature change, this number must be treated with caution. R 2 of the fit is only 0.02 and RMSE is 0.4 K.
To evaluate the goodness of this linear approach when representing climate change, RMSE of the fit was determined for each well for 1994 to 2013. We found an average RMSE of 0.4 ± 0.2 K.
When looking at the individual wells, no obvious spatial pattern for temperature changes is visible (Fig. 4). However, most wells with temperature changes lower than the 5th percentile are located close to the Drava River in Ferlach, Villach, and Kleblach-Lind in the very south of Austria (Figs. 5 and S5). Although they are up to 80 km away from each other, all of these wells show a sudden drop in temperatures in the year 2007 (wells Ia, Ib, IIa, IIb, Va, and Vb marked in blue in Fig. 5). This temperature reduction can be seen in most of the 27 wells that are less than 1 km from the Drava (Fig. S6); for 24 of these wells, temperatures in 2006 are more than 0.6 K warmer than temperatures in 2008. How- ever, temperatures (as well as additional parameters such as water level) within the river do not indicate any connection between this sudden temperature reduction and the Drava River (Fig. S6). Either way, further research is necessary to identify the cause of this temperature anomaly. Additionally, three other wells in the lowest 5 % of temperature change are all located less than 10 km from each other near the village of Kappel am Krappfeld (wells IVa, b and c marked in orange in Fig. 5). They and also additional surrounding wells show a steep decline in temperatures in 2006 before temperatures start to increase steadily again. These wells seem to be affected by the new drinking water supply (four wells with a total pumping rate of about 100 L s −1 ) located about 1 km to the south. This demonstrates the importance of including groundwater flow when trying to interpret groundwater temperature. In general, most of the extreme changes in temperature appear to be linked to local causes and do not happen gradually, but rather rapidly over the short time span of 1 or 2 years. Another example of this can be seen in wells with temperature changes higher than the 95th percentile (Figs. 5 and S7). While these highest 5th percentiles of all wells do not show local clusters to the same extent as the lowest 5th percentiles and can be observed all over the country, three wells (1a, 1b and 1c, marked in dark blue in Fig. 5) 1997 is likely the cause of the sudden temperature increase, but concrete evidence could not be identified.
Climate regime shifts
All detected CRS of the spatial median temperatures time series are shown in Fig. 6a. Overall GWTs increased by 1.2 K between the first and last CRS and SAT increased by 1.5 K.
Global CRS in air and also groundwater were detected for the late 70s, the late 80s and the late 90s by Menberg et al. (2014). Using the same algorithm spatial median annual mean GWT and SAT in Austria show shifts in the late 80s and 90s (Fig. 6a). GWTs show additional shifts in 1981 and 2007. While the shift in the late 80s is observed during the same year (1988) in GWT and SAT, the shift in the late 90s appears earlier and is more significant in GWTs. However, because SATs are the drivers of GWTs and not vice versa, the fact that the GWT change precedes the SAT change suggests that this method does not have the necessary resolution to determine short time lags between SATs and GWTs. Accordingly the detected time shifts in wells within 5 km of a weather station generally do not indicate the same CRS as the weather station: of 56 CRS observed in at least one well only, 12 are also observed in a nearby weather station no more than 1 year before (Fig. S8). However, it is also important to note that some of the analysed time series only span a 20-year time period and are thus on the shorter end for a statistically relevant analysis of climate regime shifts (Rodionov, 2006). Like with the linear approach, the goodness of the CRS and corresponding statistical step model was evaluated by determining the RMSE for the time period 1994-2013. We determined a mean RMSE value of 0.3 ± 0.1 K, which is slightly better than the RMSE for the linear fit as determined above (0.4 ± 0.2 K). Only 20 of the 227 analysed wells have a better RMSE with the linear approach than the statistical step model of the CRS approach. Hence, we conclude that the CRS method is slightly more appropriate for simulating temperature changes in groundwater than a linear approach, even for time periods as short as 20 years. However, when the individual wells and weather stations are analysed (Fig. 6b), globally observed CRS can be identified in at most 22 % (1988) of all wells. Results further show that the shift in the 90s is temporally more spread out than the shifts in the 70s and 80s in both GWT and SAT. This indicates that this shift is less well defined, and temperatures of the globe became more variable in their temporal evolution. In accordance with this interpretation there is a higher percentage of wells with CRS in all years after 1996 than before. Furthermore, more than one-third of all weather stations and wells detect a shift in 2007, indicating this year as the start of a new climate regime within Austria. While CRS in 2007 were not observed by Menberg et al. (2014), who studied earlier time series than here, this year was also identified by Litzow and Mueter (2014) as the start of a new regime for both climate and biological indicators within the North Pacific Ocean. However, the dimension of the shifts does not always agree for all wells. For example, wells experiencing a shift observed in 2007 include all wells along the Drau observed in Figs. 5 and S5, which show a sudden drop in temperature for this year. In contrast, the countrywide time series in Fig. 6a indicates a positive shift in temperatures.
Conclusions
Temperatures in 227 shallow wells and 12 weather stations in Austria, monitored in part since 1966, were analysed in this study. Linear temperature change was determined and revealed a general increase in temperature between the years 1994 and 2013 of approximately +0.36 ± 0.44 K per 10 years in the groundwater and on average +0.24 ± 0.13 K per 10 years in the air. Most extreme changes in groundwater temperatures, especially temperature decrease, could be linked to local causes such as the installation of a new drinking water supply that influences nearby groundwater wells. This reveals the extent to which groundwater temperatures are dominated by local events, groundwater flow, and the thermal properties of the surroundings. When solving local problems we can therefore not recommend relying on average relationships valid on a nation scale. Accordingly the correlation between annual mean groundwater temperatures and nearby (< 5 km) air temperatures varies greatly, from −0.3 in Linz to 0.8 near Graz. However, if spatial median groundwater temperatures and surface air temperatures of all of Austria are compared, we find a significant correlation of 0.83, demonstrating once more that groundwater temperatures are closely linked to surface temperatures and therefore experience climate change. Globally observed climate regime shifts in the late 70s, 80s and 90s could only be identified in approximately 20 % of all wells. Nevertheless, we were able to observe another shift in 2007 in 37 % of all wells and 33 % of all weather stations, indicating this year as the possible start of a new climate regime within the Alpine region. Still, further research dedicated to other climate parameters such as permafrost and snowfall is necessary to validate these findings. Additionally, our observations made in Austria should be transferred to similar regions in the world testing the transferability of the presented results. Overall cli-mate regimes represent measured temperature slightly better (RMSE: 0.3 ± 0.1 K) than the linear fit (RMSE: 0.4 ± 0.2 K).
Data availability. The GWT data used in the study were provided by the Austrian Federal Ministry of Sustainability and Tourism Directorate-General IV. -Water Management (BMNT, 2016) and are available to the public under http://ehyd.gv.at/.
The SAT data used in this study were acquired through the Central Institution for Meteorology and Geodynamics (ZAMG), Austria, in 2017. | 8,027.6 | 2017-11-23T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Simulation-Based Inference with Neural Posterior Estimation applied to X-ray spectral fitting Demonstration of working principles down to the Poisson regime
,
Introduction
X-ray spectral fitting relies generally on frequentist and Bayesian approaches; see Buchner & Boorman (2023) for a recent and comprehensive review on the statistical aspects of X-ray spectral analysis.Fitting of X-ray spectra with neural networks has been introduced by Ichinohe et al. (2018) for the analysis of high spectral resolution galaxy cluster spectra and recently by Parker et al. (2022) for the analysis of lower resolution Athena Wide Field Imager spectra of Active Galactic Nuclei.Parker et al. (2022) showed that neural networks delivered comparable accuracy to spectral fitting, while limiting the risk of outliers caused by the fit getting stuck into a local false minimum (the nightmare of anyone involved in X-ray spectral fitting), yet providing an improvement of around three orders of magnitude in speed, once the network had been properly trained.On the other hand, no error estimates on the spectral parameter were provided in the methods explored by Parker et al. (2022).
However, in a Bayesian framework, accessing the posterior distribution is possible through the simulation-based inference with amortized neural posterior estimation methodology; hereafter SBI-NPE (Papamakarios & Murray 2016;Lueckmann et al. 2017;Greenberg et al. 2019;Deistler et al. 2022), see also Cranmer et al. (2020) for a review of simulation-based inference.In this approach, we sample parameters from a prior distribution and generate synthetic spectra from these parameters.Those spectra are then fed to a neural network that learns the association between simulated spectra and the model parameters.The trained network is then applied to data, to derive the parameter space consistent with the data and the prior, being the posterior distribution.In contrast to conventional Bayesian inference, SBI is also applicable when one can run model simulations, but no Fig. 1.The Simulation-Based Inference approach emulates the traditional Bayesian inference approach.When assessing the parameters of a model, one first defines prior distributions, then defines the likelihood of a given observation, often using a forward-modeling approach.This likelihood is further sampled to obtain the posterior distribution of the parameters.The simulation-based approach does not require explicit computation of the likelihood, and instead will learn an approximation of the desired distribution (i.e. the likelihood or directly the posterior distribution) by training a neural network with a sample of simulated observations.formula or algorithm exists for evaluating the probability of data given the parameters, i.e. the likelihood.SBI-NPE has demonstrated its power in many fields, including astrophysics, e.g. to cite a few, for reconstructing galaxy spectra and inferring their physical parameters (Khullar et al. 2022), for inferring variability parameters from dead-timeaffected light curves (Huppenkothen & Bachetti 2022), for exoplanet atmospheric retrieval (Vasist et al. 2023), for deciphering the ring down phase signal of the black hole merger GW150914 (Crisostomi et al. 2023) and very recently for isolated pulsar population synthesis (Graber et al. 2023).
In this paper, we demonstrate for the first time the power of SBI-NPE for X-ray spectral fitting, to show that it delivers performances fully consistent with the XSPEC (Arnaud 1996) and the Bayesian X-ray Analysis (BXA) spectral fitting packages (Buchner et al. 2014); two of the most commonly used tools for X-ray fitting.The paper is organized as follows.In Sect.2, we give some more insights on the SBI-NPE method.In Sect.3, we present the methodology to produce the simulated data, introducing a method to reduce the prior range.In Sect.4, we show examples of single round inference in the Gaussian and Poisson regimes for simulated mock data.In Sect.5, we present a case based on multiple round inference.In Sect.6, we demonstrate the robustness of the technique against local minima trapping.In Sect.7, using a simple model, we apply the Principal Component Analysis to reduce the data fed to the network.In Sect.8, we show the performance of SBI-NPE on real data, as recorded by the NICER X-ray instrument (Gendreau et al. 2012).In Sect.9, we discuss the main results of the paper, listing some avenues for further investigations.This precedes a short conclusion.
Formalism
The SBI approach, illustrated in Fig. 1, aims at computing the probability distribution of interest, in this case the posterior distribution p(θ|x), by learning an approximation of the probability density function from a joint sample of parameters {θ i } and the associated simulated observable {x i }, using neural density estimators such as normalizing flow.A normalizing flow T is a diffeomorphism between two random variables, say X and U, which links their following density functions as follows: where J T is the Jacobian matrix of the normalizing flow.The main idea when using normalizing flows is to define a transformation between a simple distribution (i.e.normal distribution) and the probability distribution that should be modelled, which eases the manipulation of such functions.To achieve this, an option is to compose several transformations T i to form the overall normalizing flow T , each parameterized using Masked Auto-encoders for Density Estimation (MADE, Germain et al. 2015), which are based on deep neural networks.MADEs satisfy the auto-regressive properties necessary to define a normalizing flow and can be trained to adjust to the desired probability density.Stacking several MADEs will form what is defined as a Masked Auto-regressive Flow (MAF, Papamakarios et al. 2017).We refer interested readers to the following reviews by Papamakarios et al. (2021); Kobyzev et al. (2021).Greenberg et al. (2019) developed a methodology which enabled the use of MAFs to directly learn the posterior distribution of a Bayesian inference problem, using a finite set of parameters and associ-ated observable {θ i , x i }.Using this approach, one can compute an approximation for the posterior distribution q(θ, x) ≃ p(θ|x), which can be used to obtain samples of spectral model parameters from the posterior distribution conditioned on an observed X-ray spectrum.
The python scripts from which the results presented here use the sbi1 package (Tejero-Cantero et al. 2020).sbi is a PyTorchbased package that implements SBI algorithms based on neural networks.It eases inference on black-box simulators by providing a unified interface to state-of-the-art algorithms together with very detailed documentation and tutorials.It is straightforward to use, involving the call of just a few Python functions.
Amortized inference enables the evaluation of the posterior for different observations without having to re-run inference.On the other hand, multi-round inference focuses on a particular observation.At each round, samples from the obtained posterior distribution computed at the observation are used to generate a new training set for the network, yielding in a better approximation of the true posterior at the observation.Although fewer simulations are needed, the major drawback is that the inference is no longer be amortized, being specific to an observation.We will try both approaches in the sections below.
Bench-marking against the known likelihood
SBI implements machine learning techniques in situations where the likelihood is undefined, hampering the use of conventional statistical approaches.In our case, the likelihood is known.Here we recall the basic equations.Taking the notation of XSPEC (Arnaud 1996), the likelihood of Poisson data (assuming no background) is known as : where S i are the observed counts in the bin i as recorded by the instrument, t the exposure time over which the data were accumulated, and m i the predicted count rates based on the current model and the response of the instrument folding in, its instrument efficiency, its spectral resolution, its spectral coverage. . .see (Buchner & Boorman 2023) for details on the folding process.The associated negative log-likelihood, given in Cash (1979) and often referred to as the Cash-statistic, is: The final term which depends exclusively on the data (and hence does not influence the best-fit parameters) is replaced by its Stirling's approximation to give : This is what is used for the statistic C-stat option in XSPEC the best fit model is the one that leads to the lowest C-stat2 .The default XSPEC minimization method uses the modified Levenberg-Marquardt algorithm based on the CURFIT routine from Bevington & Robinson (2003).In the following sections, we use XSPEC with and without Bayesian inference, and compute Markov Chain Monte Carlo (MCMC) chains to get the parameter probability distribution and to compute errors on the best fit parameters, as to enable a direct comparison with the posterior distributions derived from SBI-NPE.By default, we use the Goodman-Weare algorithm (Goodman & Weare 2010), with 8 walkers, a burn-in phase of 5000 and a length of 50000.The analysis was performed with the pyxspec wrapper of XSPEC v.12.13.1 (Arnaud 1996).
In addition to XSPEC we have used the Bayesian X-ray Analysis (BXA) software package (Buchner et al. 2014) for the validation of our results.Among many useful features, BXA connects XSPEC to the nested sampling algorithm as implemented in UltraNest (Buchner 2021) for Bayesian Parameter Estimation.BXA finds the best fit, computes the associated error bars and marginal probability distributions, see Buchner & Boorman (2023) for a comprehensive tutorial on BXA.We run the BXA solver with default parameters, but note that there are different options to speed up BXA, including the possibility to parallelize BXA over multiple cores, as discussed in Buchner & Boorman (2023).
We now introduce a method to restrict the prior range, with the objective of providing the network a training sample that is not too far from the targeted observation(s).This derives in part from the challenge, that for this work the generation of spectra, the inference, the generation of the posteriors should be performed on a MacBook Pro 2.9 GHz 6-Core Intel Core i9, within a reasonable amount of time. 1. Prior assumption for the emission models considered.The models are defined following the XSPEC convention.N H is given in units of equivalent hydrogen column (in units of 10 22 atoms cm −2 ).Gamma is the power law photon index.NormPL is the power law normalization at 1 keV in units of photons/keV/cm 2 /s.kTbb is the blackbody temperature in keV.NormBB is the blackbody normalization given as R 2 km /D 2 10 , where R km is the source radius in km and D 10 is the distance to the source in units of 10 kpc.
Generating an efficient training sample
The density of the training sample depends on the range of the priors and the number of simulations.As the training time increases with the size of the training sample, ideally, one would like to train the network with a limited number of simulated spectra that are not too far from the targeted observation(s), yet covering plainly the observation(s).Here, we consider two methods to restrict the priors: one in which we train a network to retain the "good" samples of θ i matching a certain criterion (Sect.3.2), and one in which we perform a coarse inference of the targeted observation(s) (Sect.3.3).
Simulation set-up
Let us first describe our simulation set-up.We assume a simple emission model consisting of an absorbed power law with three parameters: the column density (N H ), the photon index (Gamma, Γ) and the normalization of the power law at 1 keV (NormPL).We used the tbabs model to take into account interstellar absorption, including the photoelectric cross-sections and the element abundances to the values provided by Verner et al. (1996) and Wilms et al. (2000), respectively.In XSPEC terminology, the model is tbabs * Powerlaw.For the simulations, we used NICER response files (for the observation identified in the NICER HEASARC archive as OBSID1050300108, see later).The simulated spectra are grouped in 5 consecutive channels so that each spectrum has ∼ 200 bins covering the 0.3 and 10 keV range.The initial range of prior is given in Table 1 for the model 1 set-up.We assume uniform priors in linear coordinates for N H and for Γ and in logarithmic coordinates for the power law normalization.The generation of synthetic spectra is done within jaxspec3 , which offers parallelization of a fakeit-like command in XSPEC (Dupourqué et al. in preparation).The generation of 10000 simulations takes about 10 seconds (including a few seconds of just-in-time compilation).In this first paper, we do not consider instrumental background.However, we note that if a proper analytical model exists for the background, the network could be trained to learn about the source and the background spectra simultaneously (with more free model parameters than in the source alone case).This would come at the expense of increasing the size of the training sample, hence inference time.The background spectrum could also be incorporated simply as a nuisance parameter, by adding to each bin of each simulated spectrum a number of counts with an empirical distribution estimated from the background spectrum.This would increase the dispersion in the spectrum, which would translate into an additional source of variance on the constraints of the model parameters.More details on this approach will be provided in the forthcoming paper by Dupourqué & al. (in preparation).
A restricted prior
For the first method, we train a ResNet classifier (He et al. 2015) to restrict the prior distributions (Lueckmann et al. 2017;Deistler et al. 2022).sbi can be used to learn regions of the parameter space producing valid simulations and distinguish them from regions that lead to invalid simulations.The process can be iterative, expecting that, as more simulations are fed to the classifier, the rejection rate will increase, and the restricted prior will shrink.It can be stopped when the fraction of valid simu-lations exceeds a given threshold, depending on the criterion.In all cases, it is recommended to perform sanity checks of the coverage of the restricted prior for the observation(s) to fit.The user has to define a decision criterion for the classifier.For demonstrating the working principle of SBI-NPE under various statistical regimes (total number of counts in the X-ray spectra, see below), the first criterion we chose is to restrict the total number of counts in the spectra within a given range.Secondly, we have also considered a criterion such that the valid simulations are the ones providing the lowest C-stat computed from the observation to fit (Cash 1979).Once the simulations are produced, the classifier is also very fast (minute timescales), depending on the condition to match and the number of model parameters and size of the training sample (as an indication, for the three parameter model considered below, a few thousands simulations are required).We note that such a classifier is straightforward to implement and could be coupled to classical X-ray spectral fitting, to initialize the fit closer to the best fit solution, and thus reduce the likelihood of getting stuck into a local false minimum.
Initial prior
Restricted prior at iteration 10 Log(NormPL) Fig. 2. The initial and restricted prior, tuned to produce spectra that have between 10000 and 100000 counts for a tbabs * powerlaw model (at its tenth iteration).As expected, only a restricted range of the power law normalization can deliver the right number of counts.
The first statistical regime to be probe is the so-called highcount Gaussian regime.We define the integration time of the simulated spectra, identical to all spectra and defined such that a reference model (N H =0.2 × 10 22 cm −2 , Gamma=1.7 and NormPL=1) corresponds to a spectrum with about 20000 counts (over 200 bins).This provides the reference spectrum (referred as Spectrum 20000 counts).We define the criterion for the restrictor, such that the valid simulations are the ones which have between 10000 and 100000 counts, ensuring that the reference spectrum (Spectrum 20000 counts) is well covered.In Fig. 2, we show the initial and final round of the restricted prior, derived with the above criterion.Given the integration time for the spectra, only a restricted set of model parameters, mostly the normalization of the power law component, can deliver the right number of counts per spectrum.
Didier Barret and Simon Dupourqué: Neural networks for X-ray spectral fitting
Initial prior
Restricted prior from coarse inference Log(NormPL) Fig. 3.The initial and restricted prior computed from a coarse and quick inference of a reference spectrum of 20000 counts.Such a coarse inference can be seen as the first step of a multiple round inference.
Coarse inference
The second method uses a coarse inference, and can be considered as the first step of a multiple round inference.A coarse inference is when the network is trained with a limited number of samples (see Sec. 4.1 for the parameters used to run the inference).The posterior conditioned at the reference observation is then used as the restricted prior.In Fig. 3, we present the result of a coarse inference of the above reference spectrum (Spectrum 20000 counts).5000 spectra are generated from the initial prior as defined in Tab. 1 for the above model, fed to the neural network, and the posterior distributions are computed at the reference spectrum.The training for such a limited sample of simulation, for three parameters, takes about 1 minute.As can be seen, the prior range is further constrained to narrower intervals.Generating a sample of 10000 spectra with parameters from this restricted prior shows that the reference spectrum is actually close to the median of the sample of the simulated spectra (see Fig. 4).The robustness of SBI-NPE against local false minima trapping (see Sect. 6), guarantees good coverage of the observation from the restricted prior.
Single round inference
Starting from the restricted prior, one can then draw samples of θ i and generate spectra applying the Poisson count statistics in each spectral bin.The spectra are then binned the same way as the reference observation (grouped by 5 adjacent channels between 0.3 and 10 keV), and injected as such in the network (no zero mean scaling, no component reduction applied, see however Sect.7).For each run, we generate both a training and an independent test sample.The coverage of the restricted prior is indicated in red.The median of the spectra sampled from the restrictor (blue line) is to be compared with the reference spectrum (green line).
In the Gaussian regime
In the first case, we run the classifier with the criterion that each of the simulated spectra assuming an absorbed power law model having between 10000-100000 counts, spread over ∼ 200 bins, are valid simulations.This covers the Gaussian regime.
For three parameters, we generate a set of 10000 spectra.The network is trained in ∼ 3 minutes.The inference is performed with the default parameters of SNPE_C as implemented in sbi, which is using 5 consecutive MADEs with 50 hidden states each for the density estimation.It then takes about the same time to draw 20000 posterior samples for 500 test spectra, i.e. to fit 500 different spectra with the same network.The inferred model parameters versus the input parameters are shown in Fig. 5.As can be seen, there is an excellent match between the input and output parameters: the linear regression coefficient is very close to 1.For N H , a minimal bias is observed towards the edge of the sampled parameter space.
We generate the posterior distributions for the reference absorbed power law spectrum and compare it with the posterior distribution obtained from XSPEC switching on Bayesian inference 4 .The comparison is shown in Fig. 6.There is an excellent match between the best fit parameters, but also the posterior distribution, including their spread.In both cases, the C-stat of the best fit is within 1σ of the expected C-stat following Kaastra 4 XSPEC by default uses uniform priors for all parameters.For this run, we considered Jeffrey's prior for the normalization of the power law instead of a log uniform distribution.XSPEC adds to the fit statistic a contribution from the prior as −2 ln P prior (Arnaud 1996) (which is in our case a positive contribution equal to the 2 ln(NormPL)).Such a contribution is removed when comparing the results of XSPEC run with Bayesian inference, with the results of SBI-NPE.We note that in BXA, Jeffrey's priors have been deprecated for log uniform priors.(2017).Note that running XSPEC to derive the posterior distribution takes about the same time as training the network.
In the above case, we compute the goodness of the fit by comparing the measured fit statistic with its expected value following Kaastra (2017).XSPEC offers the possibility to perform a Monte Carlo calculation of the goodness-of-fit (goodness command), applicable in the case when the only source of variance in the data is counting statistics.This command simulates a number of spectra based on the model and returns the percentage of these simulations with the test statistic less than that for the data.If the observed spectrum is produced by the model then this number should be around 50%.XSPEC can generate simulated spectra either from the best fit parameters or from a Gaussian distribution centered on the best fit, with sigma computed from the diagonal elements of the covariance matrix.XSPEC can either fit each simulated spectra or return the fit statistic calculated immediately after creating the simulated spectrum.Fitting the data is the option by default, but at the expense of increasing the run time.Such a feature would be straightforward to implemented with SBI-NPE and amortized inference.The set of parameters of the simulated spectra can be drawn from the posterior distribution conditioned at the observation, and the posterior distributions for each simulated spectrum can be computed from the already trained amortized network (the fit statistic associated with each simulated spectrum is as usual derived from the median of the so-computed posterior distributions).With the simulation set-up considered here, generating the simulated spectra Single Round Inference (10000) XSPEC Fig. 6.The posterior distribution estimated for the reference absorbed power law spectrum (Spectrum 20000 counts) as inferred from a single round inference with a network trained on 10000 samples (green).The posterior distribution inferred from a Bayesian fit with XSPEC is also shown in blue.
and the posterior distributions is very fast (few minutes for 1000 spectra).
We now show the count spectrum corresponding to the reference spectrum of this run, together with the folded model and the associated residuals, in Fig. 7 for both SBI-NPE and XSPEC.As expected from Fig. 6, there is an excellent agreement between the fits with the two methods.
In the Poisson regime
The above results can be considered as encouraging, but to test the robustness of the technique, we must probe the low count Poisson regime.We repeat the run above but this time generating spectra from a restricted prior generating 200 bin spectra with a number of counts ranging between 1000-10000 for the absorbed power law model.The integration time of the spectra is scaled down from the Gaussian case, so that the reference model (N H =0.2 × 10 22 cm −2 , Gamma=1.7 and NormPL=1) now corresponds to a spectrum of ∼ 2000 counts (referred as Spectrum 2000 counts).To account for the lower statistics, we train the network with a sample of 20000 spectra, instead of 10000 for the case above.The training time still takes about 3 minutes.We generate the posterior for a test set of 500 spectra, and this takes again about 3 minutes.Similar to Fig. 5, we show the input and inferred model parameters for the test sample in Fig. 8.As in the previous case, although with larger error bars, accounting for the lower statistics of the spectra, there is an excellent match between the two quantities, the linear regression coefficient remains close to 1, with some evidence for the small bias on N H at both ends of the parameter range increased.
We generate the Posterior distribution for the reference spectrum (Spectrum 2000 counts), which we also fit with BXA (assuming the same priors as listed in Tab. 1 for the model 1 set-up).The posterior distributions are compared in Fig. 9, showing again an excellent agreement.Not only the best fit parameters are consis- tent with one another, but as important, the widths of the posterior distribution are also comparable.This demonstrates that SBI-NPE generates healthy posteriors.Note that the time to run BXA (with the default solver parameters and without parallelization) on such a spectrum is comparable with the training time of the network.
Multiple round inference
In the previous cases, the posterior is inferred using single-round inference.We are now considering multi-round inference, tuned for a specific observation; the reference spectrum of 2000 counts, spread over 200 bins (Spectrum 2000 counts).For the first iteration, from the restricted prior, we generate 1000 simulations, and train the network to estimate the posterior distribution.In each new round of inference, samples from the obtained posterior distribution conditioned at the observation (instead of from the prior) are used to simulate a new training set used for training the network again.This process can be repeated an arbitrary number of times.Here we stop after three iterations.The whole procedure takes about 1.5 minutes.In Fig. 10, we show that multi-round inference returns best fit parameters and posterior distributions consistent with single-round inference (from a larger training sample) and XSPEC.We thus confirm that multi-round inference can be more efficient than single round inference in the number of simulations and is faster in inference time.Its drawback is however that the inference is no longer amortized (i.e. it will only apply for a specific observation).
Sensitivity to local minima
Fit statistic minimization algorithms may get stuck into local false minima.There are different workarounds, such as computing the errors on the model parameters to explore a wider parameter space, shaking the fits with different sets of initial parameters, using Bayesian inference, all at the expense of increasing the processing times.This is probably the reason why fit statistic minimization remains widely used, despite its known limitations.This makes worth the comparison of the sensitivity to local minima of SBI-NPE with XSPEC is its common use.
For this purpose, we are now going to consider a 5 parameter model combining two overlapping components, a power law and a blackbody.In the XSPEC terminology, the model would be tbabs * (powerlaw+blackbody) (see Tab. 1 for the model 2 simulation set-up for the priors).We build a restricted prior so that such a model produces at least 10000 counts per spectrum, as decent statistics is required to constrain a 5 model parameters.
We train a network with 100000 simulated spectra.We then generate the posterior of 500 spectra that we fit with XSPEC, with three sets of initial parameters: the model parameters, a set of model parameters generated from the restricted prior, a set of model parameters from the initial prior (that do not meet necessarily our requirement on counts).We switch off Bayesian inference in the XSPEC fits, and return for each of the fits, the best fit C-stat statistics.In Fig. 11, we compare the C-stat of SBI-NPE as derived from a single round inference with the XSPEC Single Round Inference ( 20000) BXA fitting.This figure shows that SBI-NPE does not produce outliers, while the minimization does, at the level of a few percent.The latter is a known fact.The use of a restricted prior helps in reducing the trapping in local false minima, compared to considering the wider original prior, because the XSPEC fits starts closer to the best fit parameters.The most favorable, and unre- alistic, situation for XSPEC is when the fit starts from the model parameters.In some cases, SBI-NPE produces minimum C-stat that are slightly larger than those derived from XSPEC, indicating that the best fit solution was not reached.This may simply call for enlarging the training sample of the network for such a 5 parameter model, or considering multiple-round inference.
Dimension reduction with the Principal Component Analysis
Parker et al. ( 2022) introduced the use of principal component analysis (PCA) to reduce the dimension of the data to feed the neural network, and showed that it increased the accuracy of the parameter estimation, without any penalty on computational time, yet enabling simpler network architecture to be used.The PCA performs a linear dimension reduction using singular value decomposition of the data to project it to a lower dimensional space.Unlike Parker et al. (2022), we have here access to the posteriors and it is worth investigating whether such a PCA decomposition affects the uncertainty on the parameters estimates.
Considering the run presented in Sect. 4 for the case of single round inference in the Poisson regime, we decompose the 20000 spectra with the PCA, as to keep 90% of their variance (before that we scale the spectra to have a mean of zero and standard deviation of 1).This allows to reduce the dimension of the data from 20000 × 200 to 20000 × 60, i.e. a factor of 3 reduction, leading to a gain in inference time by a factor of 2. We show in Fig. 12 the input and output parameters from a single round inference trained on dimension reduced data.As can be seen, there is still an excellent agreement between the two with the linear regression coefficient close to 1, although the bias on N H at the edge of the prior interval seems to be more pronounced (the slopes for all parameters are less than 1, indicating that a small bias may have been introduced through the PCA decomposition).We show the posteriors of the fit of the reference spec- trum of 2000 counts, in comparison with XSPEC in Fig. 13 to show that the posteriors are not broadened by the dimension reduction.
Application to real data
Having shown the power of the technique on mock simulated data, it now remains to demonstrate its applicability to real data, recorded by an instrument observing a celestial source of X-rays (and not data generated by the same simulator used to train the network).This is a crucial step in machine learning applications.
We have considered NICER response files for the above simulations because we are now going to apply the technique to real NICER data recorded from 4U1820-303 (Gendreau et al. 2012;Keek et al. 2018).For the scope of the paper, we are going to consider two cases: a spectrum for the persistent X-ray emission (number of counts ∼ 200000) and spectra recorded over a type I X-ray burst, when the X-ray emission shows extreme time and spectral variability.
NICER data analysis
We have retrieved the archival data of 4U1820-303 from HEASARC for the observation identifier (1050300108), and processed them with standard filtering criteria with the nicerl2 script provided as part of the HEASOFTV6.31.1 software suite, as recommended from the NICER data analysis web page (NICER software version : NICER_2022-12-16_V010a). Similarly, the latest calibration files of the instrument are used throughout this paper (reference from the CALDB database is xti20221001).A light curve was produced between 0.3 and 7 keV band, with a time resolution of 120 ms, so that the type I X-ray burst could be located precisely.
Spectrum of the persistent emission
Once the burst time was located, we first extract a spectrum of the persistent emission for 200 seconds, ending 10 seconds before the burst.The spectrum is then modelled by a 5-component model, as the sum of an absorbed and a power law.In XSPEC terminology, the model is tbabs * (blackbody+powerlaw).The initial range of prior is given in Tab. 1 for the model 2 set-up.
For both SBI-NPE and XSPEC spectral fitting, for a change, we consider uniform priors in linear coordinates for all the parameters.Similarly, for this observation, we build a restricted prior with the criterion that the classifier keeps 25% of the model parameters associated with the lowest C-stat(considering a set of 5000 simulations for 5 parameters).From this restricted prior, we generate a rather conservative set of 100000 spectra for a single round inference, and a set of 5000 spectra for a multipleround inference considering only three iterations.It takes about 40 minutes to train the network with 100000 spectra with 5 parameters, and 12 minutes for the 3 iteration multiple-round inference.The posterior distribution from single and multiple round inference, and the XSPEC fitting are shown in Fig. 14.As can be seen, there is a perfect match between XSPEC and SBI-NPE, demonstrating that the method is also applicable to real data.We have verified that changing the assumptions for the priors, e.g.uniform in logarithmic scale for the normalizations of both the blackbody and the power law, yielded fully consistent results, in terms of best fit parameters, C-stat and posterior distributions.The same applies when using BXA instead of XSPEC.This is the first demonstration to date that SBI-NPE performs equally well as state-of-the-art X-ray fitting techniques, on real data.
The burst emission
The first burst observed with NICER was reported by (Keek et al. 2018;Strohmayer et al. 2019;Yu et al. 2023).The burst emission is fitted with a blackbody model and a component accounting for some underlying emission, which in our case is assumed to be a simple power law.The blackbody temperature and its normalization vary strongly along the burst itself, in particular in this burst, which showed the so-called photospheric expansion, meaning that the temperature of the blackbody drops while its normalization increases, to raise again towards the end of the burst.To follow spectral evolution along the burst, we extract fixed duration spectra (0.25 seconds), still grouped in 5 adjacent channels, so that all the spectra have the same number of bins ( 200).The number of counts per spectrum ranges from 400 to 5000, hence offering the capability of exploring the technique with real data in the Poisson regime.In this range of statistics, we cannot constrain a 5-parameter model.Hence, we fix the column density and the power law index to N H =0.2×10 22 cm −2 and Γ = 1.7 respectively.The initial range of prior is given in Table 1 for the model 3 set-up.
As we want to use the power of amortized inference, we use two methods to define our training sample.First, we apply the classifier with the condition to keep the model parameters associated with spectra with a number of counts ranging between 100-10000, hence covering plainly the range of counts of the observed spectra (400-5000).From the restricted prior, we consider arbitrarily 5000 simulated spectra per observed spectrum so that the network is trained with 23 × 5000 = 115000 samples.Second, we perform a coarse inference over the full prior range, and for each of the 23 spectra, we set the restricted prior as the posterior conditioned at the corresponding spectrum.The training sample is limited to 10000 spectra for the quick and coarse inference of the 23 spectra.For each of the 23 restricted prior, we then generate 2500 simulated spectra so that their ensemble will be used to train the network, i.e. with 23 × 2500 = 57500 samples.The predictive check of this prior is shown in Fig. 15, which indicates that such a build up prior covers all the observed spectra.The training then takes about ∼ 15 minutes.The generation of the posterior samples takes ∼ 20 seconds.We then fit the data with Bayesian inference with XSPEC.We then derive the errors on the fitted parameters using MCMC.As can be seen, SBI-NPE with amortized inference for the two different restricted priors can follow the spectral evolution along the burst, with an accuracy comparable to XSPEC, even when the number of counts in the spectra goes down to a few hundred, deep into the Poisson regime.The results of our fits are fully consistent with those reported by (Keek et al. 2018;Strohmayer et al. 2019;Yu et al. 2023).This provides further demonstration that SBI-NPE is applicable to real data, and that the power of amortization can still be used for multiple spectra showing wide variability.
Discussion
We have demonstrated the first working principles of SBI-NPE for X-ray spectral fitting for both simulated and real data.We have shown that not only it can recover the same best fit parameters as traditional X-ray fitting techniques, but delivers also healthy posteriors, comparable to those derived from Bayesian inference with XSPEC and BXA.The method works equally well in the Gaussian and Poisson regimes, with uncertainties reflecting the statistical quality of the data.The existence of a known likelihood helps to demonstrate that the method is well calibrated.We have still performed recommended checks, such as Simulation-Based Calibrations (SBC) (Talts et al. 2018): a procedure for validating inferences from Bayesian algorithms capable of generating posterior samples.SBC provides a (qualitative) view and a quantitative measure to check, whether the uncertainties of the posterior are well-balanced, i.e., neither overconfident nor under-confident.SBI-NPE as implemented here passed the SBC checks, as expected from the comparison of the posterior distributions with XSPEC and BXA.
We have shown SBI-NPE to be less sensitive to local false minima than classical minimization techniques as implemented in XSPEC, consistent with the findings of Parker et al. (2022).We have shown that, although raw spectra can train the network, SBI-NPE can be coupled with Principal Component Analysis for reducing the dimensions of the data to train the network, offering potential speed improvements for the inference (Parker et al. 2022).For the simple models considered here, no broadening of the posterior distribution is observed.This kind of approach can be extended to various dimension reduction methods, since SBI-NPE is not bound to any likelihood computation.The latter also means that SBI-NPE could apply when formulating a likelihood is non optimal, e.g. in the case of the analysis of multidimensional Poisson data of extended X-ray sources (Peterson et al. 2004).
Multiple-round inference combined with a restricted prior is perfectly suited when dealing with a few observations.The power of amortization can also be used, even when the observations show large spectral variability, as demonstrated above.The consideration of a restricted prior, either on the interval of counts covered by the observation data sets or from a coarse fitting of the ensemble of observations with a small neural network used upfront, enables to define an efficient training sample, to cover the targeted observations.Note however that the use of a restricted prior can always be compensated by a larger sample size on an extended prior, with the penalty on the training time.The use of a classifier to restrict the range of priors, being easy to implement and running fast, can be coupled with standard X-ray fitting tools to increase their speed and decrease the risk to get trapped into local false minima.
Within the demonstration of the working principles of the technique, we have found that the training time of the neural network is shorter and/or comparable to state of the art fitting tools.Speeding up the training may be possible using larger computer resources, e.g.moving from a laptop to a cluster.On the other hand, once the network has been trained, generating the posterior distribution is instantaneous, and orders of magnitude faster than traditional fitting.This means that SBI-NPE holds great potential for integration in pipelines of data processing for massive data sets.The range of applications of the method has yet to be explored, but the ability to process a large sample of observations with the same network offers the opportunity to use it to track instrument dis-functioning, calibration errors, etc.This Single Round Inference (100000) Multiple round inference (5 x 5000) XSPEC Fig. 14.Left) Posterior distribution comparison between XSPEC spectral fitting and as applied to the persistent emission spectrum of 4U1820-303 (pre-burst).The spectrum is modeled as tbabs * (blackbody+powerlaw).There is a perfect match between the three methods, not only on the best fit parameters but also on the width of the posterior distributions.Right) The count spectrum of the persistent emission, together with the folded model derived from both SBI-NPE and XSPEC.The C-stat of the best fit is indicated together with its deviation against the expected value.
will be investigated for the X-IFU instrument on Athena (Barret et al. 2023).
We are aware that we have demonstrated the working principles of SBI-NPE, considering simple models, spectra with a relatively small number of bins.The applicability of the technique to more sophistical models, higher resolution spectra, such as those that will be provided by X-IFU, will have to be demonstrated, although the alternative tools, such as XSPEC and BXA, may have issues on their own in terms of processing time.Through this demonstration, we have already identified some aspects of the technique to keep investigating.For instance, an amortized network is applicable to an ensemble of spectra that must have the same binning/grouping, the same exposure time. . .The latter could be relaxed if one is not interested in the normalization of the model components (flux), but just the variations of the other parameters (e.g. the index of a power law).Alternatively, the posterior distributions of the normalizations of each of the additive components of the models could be scaled afterwards to account for the different integration times, with respect to the integration time of the simulated spectra used for the training.On the other hand, there is no easy work around the case of spectra having different number of bins.This will require some further investigation.Obviously, there are many cases when meaningful information can still be derived considering similar grouping and integration time for the spectra, as we have shown above in the case of a type I X-ray burst.Once the network has been trained, the generation of posteriors being instantaneous, SBI-NPE makes it possible to track model parameter variations on timescales much shorter than what is possible today with existing tools.
The quality of the training depends on the size of the training sample.Caution is therefore recommended when lowering the sample size to increase the inference speed.It should be stated again, that when using amortized inference, the time to train the network will always be compensated afterwards, by the order of magnitude faster generation of the posteriors for a large number of spectra.There is no rule yet to define the minimum sample size.Similarly, for multiple-round inference, the number of iterations is a free parameter, that may need some fine-tuning to ensure that convergence to the best fit solution has been reached.The existence of alternative robust tools such as XSPEC and BXA will help to define guidelines.For complex multi-component spectra, the size of the training sample will have to be increased.The use of a restricted prior will always help, but considering dimension reduction, e.g.decomposition in principal components, or the use of an embedding network to extract the relevant summary features of the data, may become mandatory.Any loss of information will have an impact on the inference itself.Another limitation may come from the time to generate the simulations to train the network.Here we have used simple models and a fully parallel version of the XSPEC fakeit developed within jaxspec (Dupourqué & al. in preparation).This was not a limiting factor for this work, as generating thousands of simulations takes only a few seconds.
The python scripts from which the results have been derived are based on the sbi package (Tejero-Cantero et al. 2020).With this paper, we release through GitHub, the SIXSA5 (Simulationbased Inference for X-ray Spectral Analysis) python package, from which the working principles of single and multiple round inference with neural posterior estimation have been demonstrated.The python scripts that come with reference spectra are straightforward to use, and can be customized for different applications.We also release a first version of jaxspec to further support the development and use of SBI-NPE for X-ray spectral fitting.Note that jaxspec is currently limited in the number of models available (only basic analytical models, as the ones used here, are available).It remains obviously possible to generate and use simulated spectra produced with software packages like XSPEC.This is encouraged, in particular for testing SBI-NPE with a large number of model parameters.
Conclusions and way forward
We have demonstrated for the first time the working principles of fitting X-ray spectra with simulation-based inference with neural posterior estimation, down to the Poisson regime.We have applied the technique to real data, and demonstrated that SBI-NPE converges to the same best fit parameters and provides fitting errors comparable to Bayesian inference.We may therefore be at the eve of a new era for X-ray spectral fitting, but more work is needed to demonstrate the wider applicability of the technique to more sophisticated models and higher resolution spectra, and in particular to those provided by the new generation of instruments, such as the X-IFU spectrometer to fly on-board Athena.Yet, along this work, we have not identified any show-stoppers for this not to be achievable.Certainly, the pace at which machine learning applications develop across so many fields will also help in solving any issues that we may have to face, strengthening the case for developing further the potential of Simulation-Based Inference with Neural Posterior Estimation for X-ray spectral fitting.May the release of the SIXSA python package help the community to contribute to this exciting prospect.
Acknowledgments
DB would like to thank all the colleagues who shared their unfortunate experience of getting stuck, without knowing, into local false minima, when doing X-ray spectral fitting.The fear of ignoring that the true global minimum is just nearby is what motivated this work in the first place with the hope of preventing sleepless nights in the future.DB is also grateful to all his X-IFU colleagues, in particular from CNES, for developing such a beautiful instrument that will require new tools, such as the one introduced here, for analyzing the high quality data that it will generate.DB/SD thank Alexei Molin and Erwan Quintin for their support and encouragements along this work.The authors are grateful to Fabio Acero, Maggie Lieu, the anonymous referee and the editor for useful comments on the paper.Finally DB thanks Michael Deistler for support in using and optimizing the restricted prior.
Fig. 4 .
Fig.4.A prior predictive check showing that the coarse inference provides good coverage of the reference spectrum (Spectrum 20000 counts).The coverage of the restricted prior is indicated in red.The median of the spectra sampled from the restrictor (blue line) is to be compared with the reference spectrum (green line).
Fig. 5 .
Fig.5.Inferred model parameters versus input model parameters for the case in which spectra have between 10000-100000 counts, spread over 200 bins.A single round inference is performed on an initial sample of 10000 simulated spectra.The median of the posterior distributions is computed from 20000 samples and the error on the median is computed from the 68% quantile of the distribution.The linear regression coefficient is computed for each parameter over the 500 test samples.
Fig. 7 .
Fig. 7. Top) The count spectrum corresponding to the reference absorbed power law spectrum (Spectrum 20000 counts), together with the folded best fit model from both SBI-NPE (green solid line) and XSPEC (blue dashed line).Bottom) The residuals of the best fit from SBI-NPE.
Fig. 8 .
Fig.8.Inferred model parameters versus input model parameters for the case in which spectra have between 1000-10000 counts, spread over 200 bins.The neural network is trained with 20000 simulations and the posteriors for 500 test spectra are then computed.The medians of the posteriors are computed from 20000 samples and the error on the median is computed from the 68% quantile of the distribution.The linear regression coefficient is computed for each parameter over the 500 test samples.
Fig. 9 .Fig. 10 .
Fig. 9.The posterior distributions for a reference spectrum of 2000 counts as derived from SBI-NPE with a single round inference of a network trained with 20000 samples (green) and by BXA in orange.
Fig. 11 .
Fig. 11.Comparison between XSPEC spectral fitting (Fit statistic minimization) and SBI-NPE.The initial parameters of the XSPEC fits are the input model parameter (top panel), a set of parameters generated from the restricted prior (middle panel), and a set of parameters generated from the initial prior (bottom panel).The y-scale is the same for the three panels.Outliers away from the red line are due to XSPEC getting trapped in a local false minimum.
Fig. 12 .
Fig.12.Inferred model parameters versus input model parameters for the case in which spectra have between 1000-10000 counts, spread over 200 bins.The best fit parameters are derived from a single round inference of a network trained by 20000 samples, reduced by the PCA, so that only 90% of the variance in the samples is kept.
Fig. 13 .
Fig.13.The posterior distributions for a reference spectrum of 2000 counts as derived from SBI-NPE with a single round inference of a network trained with 20000 samples (green) decomposed by the PCA (green, dimension reduced by a factor of 3) and XSPEC (blue).
Fig. 15 .
Fig.15.Left) 23 burst spectra covered by the restricted prior, indicated by the region in grey.Right) Recovered spectral parameters with SBI-NPE and XSPEC.The agreement between the different methods is remarkable.The figure shows the number of counts per spectrum (top), the normalization of the power law, the temperature of the backbody in keV, and at the bottom the blackbody normalization translated to a radius (in km) assuming a distance to the source of 8 kpc.Single Round Inference is performed with a training sample of 23 × 5000 spectra, derived from a restrictor, constraining the number of counts in the spectra to be between 100 and 10000 counts (red filled circles).Single Round Inference is also performed with a training sample of 23 × 2500 spectra, generated from a restrictor build from a quick inference (green filled circles).XSPEC best fit results are shown with blue filled circles. | 11,284 | 2024-01-11T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Does Self-Selection Affect Samples ’ Representativeness in Online Surveys ? An Investigation in Online Video Game Research
Background: The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective: Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods: All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results: We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions: Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted. (J Med Internet Res 2014;16(7):e164) doi:10.2196/jmir.2759
Introduction
An increasing number of medical and psychological studies are performed through online surveys.Compared with face-to-face interviews, Internet-based surveys can quickly reach more potential participants, reduce measurement error and bias related to answers on stigmatizing topics, and enhance the inclusion of least represented or "quasi-secret" and stigmatized population groups that are usually difficult to reach and recruit [1][2][3][4][5].Costs can be more easily contained with Internet-based surveys, and data collection can be simpler and more reliable compared to traditional paper-and-pencil data entry procedures.Some studies suggest that the quality of the data provided by Internet-based surveys is at least as good as in those collected by traditional paper-and-pencil methods on self-selected samples [6][7][8].Some Web surveys have been based on the assessment of a whole population or on samples obtained using random sampling procedures (ie, a sampling technique whereby all individuals in the population have an equal chance of being selected, eg, emailing a random sample of students in a university).For instance, Internet-based surveys among students enrolled by email have generated valid and reliable estimations of substance use [3,9,10], comparable to those obtained in studies that applied ordinary mail invitation letters or phone calls to recruit participants.
Many Web studies are, however, self-selection surveys [11] that are not based on probability sampling [12], particularly in health-related studies.Websites and online social networks such as Facebook appeared to be a viable recruitment option for the assessment of health behaviors [13,14].However, lack of researchers' knowledge about the website members' contacts leads to the impossibility of obtaining a random sampling.The survey questionnaire is then usually put on the Web.Potential participants are among those people with Internet access who visit the website, find the study information, and decide to complete the survey.In the case of self-selection surveys, the researcher then has no control over the selection process and can work only on the design of the study advertisement (such as graphics and content, length of questions, possible incentives) or on a selection of an appropriate website or forum to promote the response rate to electronic questionnaires [15].Online self-selection surveys are thus particularly subject to coverage and selection bias, which undermines the external validity of studies and the interpretation of findings [12].
Coverage bias is possibly influenced by patterns related to Internet access or to specific website access (ie, differences between people with or without Internet access) and to the possibility of being particularly interested in the study for reasons that may or may not be related to the content and/or objective of the survey itself.Furthermore, exposure to the advertisement is influenced by the time spent on a specific website, and chain sampling bias may also occur because heavy users may be more prone to share information about the study with other contacts than light users [16].
Self-selection bias (individuals who select themselves for the survey) may be of great importance [12], notably, in consideration of the usually relatively low participation rate [16].It is difficult to estimate the impact of any selection bias because information on non-participants is usually not available, and comparisons between the included and the excluded samples are not feasible [12].
There is some evidence that the self-selected samples of Internet-based surveys may systematically differ from samples drawn from the general population using other sampling procedures [16,17].One study showed that an Internet-based study sample had higher past month rates of alcohol and marijuana use than those found in other similar and non-self-selected samples of smokers (behaviors also possibly more easy to disclose online) [17].Similarly, comparison of registry-recruited cancer survivors with an online recruited sample found that the Internet sample has lower social support and greater mood disturbances than the cancer-based registry-recruited one [18].Another study found also that participants who preferred online surveys to paper-pencil questionnaires differed from their counterparts on a number of sociodemographic variables [19].The effect of the self-selection bias is also possibly important in large sample size studies, as suggested from differences in actual election results and a number of online opinion surveys [12].
To the best of our knowledge, studies are lacking on the possible differences between "pure or perfect" random samples and self-selected samples of users of specific Internet services such as online games or social network websites.This weakness is possibly explained by the difficulty for researchers, independently from websites owners, to obtain random samples of Internet users on specific websites.
The online game World of Warcraft (WoW) offers some possibilities to approach the question of selection and self-selection bias in online surveys.WoW gamers have been specifically studied in online self-selected surveys in attempts to assess motivations to play and possible psychological factors associated with gaming addiction [20][21][22][23].
In WoW, players assume the role of a fictional character, or "avatar".An avatar is characterized by a number of elements such as name and visual representation.The avatar's progression is a core attribute of WoW, implying that an avatar will develop new skills and powers as rewards for the success obtained during in-game missions or quests (eg, beating a monster, finding something specific, exploring areas of the game).Each avatar's progression is accessible via the "Armory", an official database reporting the achievements related to each avatar evolving in WoW [24].Players commonly regroup themselves in guilds (hierarchical organizations of avatars with shared objectives and backgrounds).Each guild has its particular regulations.Players who want to join a given guild usually need to contact the guild's chief and explain their motivations to join the guild and to give some evidence that their avatar meets the guild's conditions [22].Furthermore, the psychological characteristics of the gamers, such as motivation to play [21,23,25], have been shown to be associated with actual in-game behaviors and achievements as reported by the Armory scores [22].Accordingly, the Armory scores of a given avatar, to some extent, reflect the game style and commitment of a given WoW player (ie, details of the achievements reached).The variables assessed in the present study were extracted from the armory and could be considered as an "ecological measure" (the measures automatically collected during game play represent direct in-game behaviors) of both the commitment of the players and their playing preferences [22].
WoW thus offers the possibility of comparing characteristics of the progression of self-selected avatars to a "pure or perfect" random sample of avatars.A sample is considered pure or perfect in the sense that every randomly selected avatar is actually included in the sample, whereas in classic studies (ie, non-self-selected samples), subjects are allowed to refuse to participate, which could induce a selection bias.
The aim of the current study was to compare the armory characteristics of two self-selected samples with a random sample of WoW avatars.
Summary
The study compared a random sample of avatars with two different self-selected samples.Only the avatars at the maximum level of the game version at inclusion were included in the study.The mechanics of "leveling" is as follows: When a player starts to play with an avatar, this avatar automatically starts at level 1.While playing, avatars gain experience points and these points allow the avatar to reach new levels (10,000 points to reach level 2; 25,000 to reach level 3, etc).Each new game version allows avatars to gain higher maximum levels.At the creation of the game, the maximum level an avatar could reach was 60.Each time that an expansion pack is released, the highest reachable level is raised (80 for Wrath of the Lich King and 85 for the Cataclysm versions of the game).
As described below, most of the avatars of the self-selected samples were at the maximum level.This level is not the maximum of the possible in-game achievements (reflected by the armory scores) but something like a mandatory "pass" for certain important tasks in the game (especially raids).To be considered as a seriously involved player, an avatar has to reach this maximum.So, including only avatars at this maximum level makes the avatars more comparable in terms of game "involvement".
First Self-Selected Sample
The first self-selected sample (self-selected sample 1) of avatars was from a study on the relationships between players' self-reported motives to play and their in-game behaviors [21].
The study was performed between June and December 2010 and was approved by the ethical committee of the Psychology Department of the University of Geneva.Inclusion criteria were French-speaking WoW players who were aged 18 years or older.Participants were recruited through advertisements posted in dedicated French-language forums: a guilds forum, an official Blizzard WoW forum, and more general online and video games forums.Some participants also joined the study after having heard about it in the local press or from television interviews.All participants gave online consent prior to starting the online survey.So, the sample included the avatars of online gamers who actively participated in the study given the identity of their avatars.Concomitant avatars' in-game behaviors were collected through the French Armory website [24].
The WoW avatar achievements studied here and reported in the Armory (Figure 1) are as follows: general achievements, quests (progression in the various available quests in the game), exploration (exploring each area in the game), player versus player (fighting other players), dungeons and raids (raids or dungeon crawling, ie, specific missions needing a group of players to achieve a common objective), profession, reputation, word events, and total completed.These achievement scores (Figure 1) were reported in the following format: score, maximum score (maximum possible score), and percentage of the maximum score.This percentage is calculated as the ratio of the scores gained by the player to the maximum possible score of the achievement in question multiplied by 100.For instance, a player who has obtained a score of 20 for quests out of a maximum of 127 is credited with a percentage of 15.75.All other percentages considered for the analysis were also calculated in the same way, and their means were compared across samples.Taking the percentage allows comparison of the avatars in different time periods despite possible modification in the WoW game.Some achievements such as "feats of strengths" and "total points" were not expressed as a percentage and were therefore not included in the study because of the difficulty in interpreting these scores in the case of game modifications.
Of a total of 1601 participants who started the survey (self-selection), 690 completed it (43.10%)and concomitantly provided the names of their avatar and the realms in which they play (ie, the name of their server, which is necessary to identify the avatar).Among these avatars, only those with a level of 80 were included in the present study, which represents 663 avatars of 690 (96.1%).This level was the higher one at the time of inclusion of the self-selected sample.
Second Self-Selected Sample
The second self-selected sample (self-selected sample 2) of avatars was from a study performed between December 2011 and April 2012.Similarly, the sample included the avatars of online gamers who actively participated in the study given the identity of their avatars.Concomitant avatars' in-game behaviors were collected in the same way as for the first self-selected sample.
The study, approved by the ethical committee of the Psychology Department of the University of Geneva, had similar purposes to the first study and similar recruitment procedures.The sample was assessed with the same measures from the Armory.One important added value of this smaller sample was the time of recruitment, which took place after the release of the Cataclysm version of the game in December 2010; it is therefore a version of the game similar to the one related to the random sample described in the next section.At that time, the maximum level was raised from 80 to 85. Furthermore, response bias is considered as an "individual" characteristic of a given sample; replication of the results on different samples could be considered as a way to increase the validity of a given study conclusion [26].
In total, 104 participants participated in the survey; of these, 99 avatars (95.2%) had a level of 85 (the maximum) and were included in the present study.
Random Sample of Avatars
The list of avatars was found on a public website of game players [27] on February 25, 2012.On this website [27], each server was presented with numbered avatars.
Due to the fact that the two self-selected samples were recruited among French speaking WoW players, only the French-speaking population of avatars (found on French servers) was considered in order to ensure group comparability.
Given that most of the avatars of the two self-selected samples (96.1% of the first one, 637/663 avatars, and 95.2% of the second one, 94/99 avatars) were at the maximum level of the game version at the time of recruitment, only those avatars were included in our study and compared with a random sample of avatars.For the sake of comparability, all avatars included in the study were at the 85 level, which is the maximum level related to the Cataclysm version.
To form the random sample, 600 avatars were randomly drawn.The number of selected avatars from each server was proportional to the contribution of each server to the total population.The random allocation was made using a specialized website [28].The avatars selected by this procedure were then searched for and assessed in the WoW official comprehensive database [24].Only avatars considered as still active by the WoW game were registered in this database.
Statistical Analyses
Statistical analyses were performed using SPSS, version 18.0.An initial exploratory analysis involved the calculation of percentages, as well as means and standard deviations of the above-mentioned outcome measures.
To address the research question, analysis of variance (ANOVA) or t tests are appropriate in comparing mean percentages across groups.However, although F tests are robust against departure from normality, the homogeneity of variance is a strong assumption that must be satisfied for the ANOVA results to be reliable.As percentages and proportions variables are not likely to meet normality and homogeneity of variance assumptions, the arcsine-transformation is often used with this type of data and serves the purpose of normalizing them and stabilizing their variance.First, all variables were expressed as proportions, that is, between 0 and 1, and then they were transformed according to the following formula: y=2arcsin (SQRT(p)), where p stands for proportion.The combined effect of the square root with the XSL • FO RenderX inverse sine compresses the upper tail of the distribution and stretches out both tails relative to the middle.The ANOVAs and t tests were done on the transformed variables after visual inspections of their normality and Q-Q plots excluded inacceptable patterns.But for the sake of completeness, the tables also display the variables in their original scale.In a first step, the three samples were compared (self-selected sample 1 vs self-selected sample 2 vs random sample) using a one-way between-group ANOVA to explore the impact of each sample on a list of 10 selected variables.In a second step, the two self-selected samples were merged, since they did not differ (shown by post hoc comparison tests), and two-sample t tests were done, comparing the random sample with the merged self-selected samples.To account for multiple comparisons testing (10 multiple ANOVAs and t tests), we performed Bonferroni corrections deflating alpha type I error so that the adjusted significance level is alpha/10 (here .005).Finally, a chi-square test was carried out to compare proportions of avatar per guild between the two types of samples.
Results
Table 1 shows the ANOVA results for each variable of interest.Except for "guild membership", overall, statistically significant differences at the .005level between the three samples were found for all assessed variables.Bonferroni post hoc tests showed that these differences mainly occurred within each self-selected sample compared with the random sample.However, these differences were not statistically significant between the self-selected sample 1 and the random sample for exploration (P=.6), between the self-selected sample 2 and the random sample for total completed (P=.1), and between neither one of the self-selected samples and the random sample for guild (P=1.0 respectively).Comparing the two self-selected samples, Bonferroni's post hoc tests showed that they differed on the following variables: exploration, player versus player, and quests (P<.001 respectively).No further difference was observed for the other variables.We merged these two self-selected samples into one bigger sample, which in turn was compared to the random sample.Table 2 shows the two-sample t tests between the new self-selected sample and the random sample.Both samples differed significantly at the .005level on each variable, except for guild membership and exploration, with the random sample having a similar mean to that of the self-selected sample..5 -0.04 (-0.21 to 0.12) 2.77 (1.01) 2.73 (1.06) Arcsine-transformed value Some guilds were overrepresented in the self-selected samples (a form of guild effect: people from the same guild may encourage their partners to participate in the study).Table 3 shows the number of avatars per guild.The sample sizes here are lower since not every avatar belongs to a guild.The range is between 1 and 11, with one guild from the self-selected sample having 11 participating avatars.A chi-square test reveals that the distribution of avatar per guild is different between the two types of samples.
Principal Findings
In the French-speaking community of WoW players, three samples of avatars, one purely random and two self-selected, were used to assess the potential self-selection bias of Internet-based studies.To our knowledge, this is the first study to include a perfect random sample, since all randomly selected subjects (avatars) were incorporated in the sample.
The method used in this paper is somewhat new, dealing with the opportunity given by the development of online avatars of Internet users.Table 4 gives some details about the similarities and the differences related to Internet surveys, surveys on online gamers, and studies on avatars.
Possible data obtained
The avatar is selected randomly from a database (like for the random sample of our study).
The human user decided to include their avatar in a study (self-selection bias, like for the 2 self-selected samples of our study) Possible self-selection bias.The participant (human) is or is not informed about the survey and decides to or not to participate and to complete the survey.
Possible self-selection bias.The participant (human) is or is not informed about the survey and decides to or not to participate and to complete the survey.
Self-selection bias
The samples were compared on the basis of the in-game achievements of the avatars expressed in percentages.This allows comparison of avatars despite possible game modifications, as shown by the lack of differences between the two self-selected samples recruited at two different times.The second self-selected sample and the random sample were both included during the same Cataclysm version of the game, whereas the first self-selected sample was included before this game version.
According to the hypothesis of a self-selection bias, it appears from the study results that a self-selected sample of website users differs from a "pure or perfect" random sample.The self-selected samples had higher scores than the random sample on most of the assessed in-game behavior variables.The self-selected samples appear to be more involved in the game than the random sample avatars.This could occur for different reasons.
To self-select, a player needs first to see the advertisement for a study (eg, "We are looking for active World of Warcraft players, >18 years old, to participate in an online survey on your motives to play and your psychological profile.We will ask the name of your main avatar and match your answers to Blizzard armory's data.The questionnaire will take approximately 15 minutes of your time").Second, a player needs to consider participating and to agree to it.Therefore, having the will to be involved in a study could lead to the selection of specific subjects with certain characteristics (eg, personality, game involvement, special interest in the purpose of the study).Because the participants responded to an Internet advertisement for the study, highly involved players are more likely to see the ad than occasional players because of the time spent on WoW-related websites.On the other hand, one could also assume that the will to participate is related to both involvement in the game and an interest in the proposed studies.
The study finding is consistent with those of other studies linking survey participation with involvement (greater interest, connection, and concern related to the given behavior or possibly to the study results) in the assessed behaviors [19,29].
One may hypothesize that the statistically significant results of the study were due to the large sample size and that type I error (finding a difference when in fact there is none) could not be ruled out.But the magnitude of the differences found between the groups (as displayed in Table 2) cannot be imputed to chance alone and therefore does not support this point of view.
Limitations
Some limitations warrant further consideration.First, the specificity of WoW, including the guild effect (players organized in a guild), may increase the inclusion of participants who are highly involved in the game via a chain sampling bias.This may limit the generalizability of the results to other domains of Internet use-related behaviors.However, most Internet-related activities involve some form of social networking that promotes chain sampling activities and a possibly similar bias.
Second, the study was done on avatars and not directly on people.People may have more than one avatar on WoW.Thus, we cannot exclude the possibility that the results may be partly explained by differences between the avatar chosen by the self-selected sample as representative and the randomly selected avatars.Inclusion of active avatars at the maximum level related to each game extension and particularly lack of statistically significant differences in guild involvement and exploration (for the first self-selected sample) and on total completed (for the second self-selected sample) suggest, however, that the random sample is composed of at least reasonably credible avatars involved in the game.
Although most of the avatars were affiliated with guilds, affiliation was not an inclusion criterion.However, it was used to assess a form of guild effect (ie, higher proportions of avatars from the same guilds in the self-selected groups in comparison to the random one).Furthermore, guild participation could be considered as a useful index of "serious" avatar in-game activity (ie, the avatar was accepted by a guild).
Conclusions
Because of the important differences between the self-selected samples and the randomly selected sample, and despite the acknowledged limitations, the study invites careful consideration of the conclusions made from online self-selected samples and the possibility of an overrepresentation of subgroups of more involved or more concerned users.Therefore, it does not appear possible to draw general epidemiological conclusions from Internet-based self-selection surveys (eg, on the prevalence of game addiction among website users or the general population).However, the studies may be of high interest to subgroups of users who are more involved in the game and the study purpose.In particular, such studies may allow the linking together of different assessed variables (such as mood, motives, or personality and a given behavior) in the studied sample.This remains important, particularly because of the possible advantages of online studies (eg, large sample sizes, possible access to people who are usually more difficult to reach, access to stigmatized behaviors).
The possible collaboration with webmasters may further improve understanding of the representativeness of self-selected samples by the random selection of the users (ie, contacting users by email to build a random sample as control group) or by comparison of the responders to non-responders regarding general characteristics such as features related to website use or, to some extent, potential biases regarding clinical variables (eg, game addiction).
Figure 1 .
Figure 1.Example from the Armory.
Table 1 .
Comparison of mean values between three samples: self-selected sample 1 (n=663), self-selected sample 2 (n=99), and random sample (n=478) by one-way ANOVA performed on the transformed variables.
Table 2 .
Comparison of mean values between the merged self-selected sample (n=762) and the random sample (n=478) by t test performed on the transformed variables.
Table 4 .
Comparison of Internet surveys, online game surveys, and studies on avatars. | 5,854.8 | 2014-07-01T00:00:00.000 | [
"Computer Science",
"Medicine",
"Psychology"
] |
A Review of Keratin-Based Biomaterials for Biomedical Applications
Advances in the extraction, purification, and characterization of keratin proteins from hair and wool fibers over the past century have led to the development of a keratin-based biomaterials platform. Like many naturally-derived biomolecules, keratins have intrinsic biological activity and biocompatibility. In addition, extracted keratins are capable of forming self-assembled structures that regulate cellular recognition and behavior. These qualities have led to the development of keratin biomaterials with applications in wound healing, drug delivery, tissue engineering, trauma and medical devices. This review discusses the history of keratin research and the advancement of keratin biomaterials for biomedical applications.
Introduction
One of the primary goals of biomaterials research is the development of a matrix or scaffolding system that mimics the structure and function of native tissue. For this purpose, many researchers have explored the use of natural macromolecules due to their intrinsic ability to perform very specific biochemical, mechanical and structural roles. In particular, protein-based biomaterials have emerged as potential candidates for many biomedical and biotechnological applications due their ability to function as a synthetic extracellular matrix that facilitates cell-cell and cell-matrix interactions. Such substrates contain a defined, three-dimensional microstructure that supports cellular proliferation and OPEN ACCESS cell-guided tissue formation, both of which are important characteristics for biomaterial scaffolds. In addition, the strong bioactivities and diverse physiochemical properties of proteinaceous macromolecules are attractive for other biomedical applications for which biocompatibility is essential, such as medical devices, bioactive surfaces, hygiene products, etc.
Several proteins have been investigated in the development of naturally-derived biomaterials, including collagen, albumin, gelatin, fibroin and keratin. Of these, keratin-based materials have shown promise for revolutionizing the biomaterial world due to their intrinsic biocompatibility, biodegradability, mechanical durability, and natural abundance. This review focuses on the history of keratin research and the development of keratin-based biomaterials for biomedical applications. A brief review of keratin biology is also discussed with an emphasis on how the proteins are developed within the hair fiber.
Keratin Biology
The term "keratin" originally referred to the broad category of insoluble proteins that associate as intermediate filaments (IFs) and form the bulk of cytoplasmic epithelia and epidermal appendageal structures (i.e., hair, wool, horns, hooves and nails). Subsequent research of these structural proteins led to the classification of mammalian keratins into two distinct groups based on their structure, function and regulation. "Hard" keratins form ordered arrays of IFs embedded in a matrix of cystinerich proteins and contribute to the tough structure of epidermal appendages. "Soft" keratins preferentially form loosely-packed bundles of cytoplasmic IFs and endow mechanical resilience to epithelial cells [1−3]. In 2006, Schweizer et al. [4] developed a new consensus nomenclature for hard and soft keratins to accommodate the functional genes and pseudogenes for the full complement of human keratins. This system classifies the 54 functional keratin genes as either epithelial or hair keratins. The structural subunits of both epithelial and hair keratins are two chains of differing molecular weight and composition (designated types I and II) that each contain non-helical endterminal domains and a highly-conserved, central alpha-helical domain. The type I (acidic) and type II (neutral-basic) keratin chains interact to form heterodimers, which in turn further polymerize to form 10-nm intermediate filaments. Although hard and soft keratins have closely related secondary structures, distinct differences in amino acid sequences contribute to measurable differences between the filamentous structures. Most notably, hair keratins contain a much higher content of cysteine residues in their non-helical domains and thus form tougher and more durable structures via intermolecular disulfide bond formation [2,5,6].
Hair Keratins
Hair fibers are elongated keratinized structures that are composed of heavily crosslinked hard keratins. Each fiber is divided into three principle compartments: the cuticle, cortex, and medulla. The thin outer surface of the fiber, the cuticle, is a scaly tubular layer that consists of over-lapping flattened cells. The cuticle primarily contains beta-keratins that function to protect the hair fiber from physical and chemical damage. The major body of the hair fiber is referred to as the cortex, which is composed of many spindle-shaped cells that contain keratin filaments. Occasionally, in the very center of the hair fiber is a region called the medulla that consists of a column of loosely connected keratinized cells [7].
Within the cortex of the hair fiber are two main groups of proteins: (1) low-sulfur, "alpha" keratins (MW 40−60 kDa) and (2) high-sulfur, matrix proteins (MW 10−25 kDa). Collectively, the hair fiber consists of 50−60% alpha keratins and 20−30% matrix proteins [7]. The alpha keratins assemble together to form microfibrous structures known as keratin intermediate filaments (KIFs) that impart toughness to the hair fiber. The matrix proteins function primarily as a disulfide crosslinker or glue that holds the cortical superstructure together and are also termed keratin associated proteins or KAPs [4]. In total, there are 17 human hair keratin genes (11 type I; 6 type II) [4] and more than 85 KAP genes [8] that potentially contribute to the hair structure in humans.
Development of Hair Keratins
Hair morphogenesis begins in a proliferative compartment at the base of the hair follicle called the bulb. Within this region, cells divide and differentiate to form the various compartments of the hair follicle. The hair follicle is a cyclic regeneration system comprised of actively migrating and differentiating stem cells responsible for the formation and growth of hair fibers. The follicle undergoes a continuous cycle of proliferation (anagen), regression (catagen), and quiescence (telogen) that is regulated by over thirty growth factors, cytokines and signaling molecules [9,10]. The mature anagen hair follicle contains a concentric series of cell sheaths, the outermost of which is called the outer root sheath (ORS), followed by a single cell layer called the companion sheath. The inner root sheath (IRS) lies adjacent to the companion layer and consists of three compartments: the Henle layer, the Huxley layer, and the IRS cuticle. The hair fiber fills the center of this multilayered cylinder, which is itself divided into cuticle, cortex and medulla [8−10]. As cells within the hair shaft terminally differentiate, they extrude their organelles and become tightly packed with keratin filaments. The cysteine-rich keratins become physically crosslinked upon exposure to oxygen and give strength and flexibility to the hair shaft [10].
Keratin genes have complex, differential, and in many cases sequential expression patterns within the cuticle and cortex of the hair follicle [5,11−14]. For example, only a few keratins are expressed in the hair-forming matrix of the cortex and cuticle, whereas others are sequentially switched on upon differentiation in the lower cortex. The bulk of keratins are expressed in the middle cortex ("keratinizing zone") of the ascending hair fiber. Other keratin expressions are restricted to the hair cuticle and are sequentially expressed during hair morphogenesis [5,13]. The highly regulated expression pattern of keratins during hair morphogenesis is indicative of the functional differences between acidic and basic keratins, although this relationship is not yet fully understood [11,12].
Early Uses of Keratins
The earliest documented use of keratins for medicinal applications comes from a Chinese herbalist named Li Shi-Zhen in the 16 th century. Over a 38-year period, Shi-Zhen wrote a collection of 800 books known as the Ben Cao Gang Mu that describe more than 11,000 therapeutic prescriptions. Among them is a substance made of ground ash from pyrolized human hair that was used to accelerate wound healing and blood clotting called Xue Yu Tan, also known as Crinis Carbonisatus. Although the details about the discovery of the biological activity of human hair are not reported in great detail, its uses for medicinal purposes are clearly documented [15].
The word "keratin" first appears in the literature around 1850 to describe the material that made up hard tissues such as animal horns and hooves (keratin comes from the Greek "kera" meaning horn). At the time, keratins intrigued scientists because they did not behave like other proteins. In particular, normal methods for dissolving proteins were ineffective for solubilizing keratin. Although methods such as burning and grinding had been known for some time, many scientists and inventors were more interested in dissolving hair and horns in order to make better products. The resolution to the insolubility problem came in 1905 with the issue of a United States patent to John Hoffmeier that described a process for extracting keratins from animal horns using lime. The extracted keratins were used to make keratin-based gels that could be strengthened by adding formaldehyde [16].
During the years from 1905 to 1935, many methods were developed to extract keratins using oxidative and reductive chemistries [17−22]. These technologies were initially applied to animal horns and hooves, but were also eventually used to extract keratins from wool and human hair. The biological properties of the extracts led to increased interest in the development of keratins for medical applications, and among the first inventions were keratin powders for cosmetics, composites, and coatings for drugs [23−25].
During the 1920s, keratin research changed its focus from products made from keratin to the structure and function of keratin proteins. Several key papers were published that analyzed oxidatively and reductively extracted keratins [21,22]. These scientists soon concluded that many different forms of keratin were present in these extracts, and that the hair fiber must be a complex structure, not simply a strand of protein. In 1934, a key research paper was published that described different types of keratins, distinguished primarily by having different molecular weights [22]. This seminal paper demonstrated that there were many different keratin homologs, and that each played a different role in the structure and function of the hair follicle.
Keratin Research from 1940−1970
It was during the years of World War II and immediately after that one of the most comprehensive research projects on the structure and chemistry of hair fibers was undertaken. Driven by the commercialization of synthetic fibers such as Nylon and polyester, Australian scientists were charged with protecting the country's huge wool industry. Synthetic fibers were seen as a threat to Australia's dominance in wool production, and the Council for Scientific and Industrial Research (later the Commonwealth Scientific and Industrial Research Organisation or CSIRO) established the Division of Protein Chemistry in 1940. The goal of this fundamental research was to better understand the structure and chemistry of fibers so that the potential applications of wool and keratins could be expanded. Earlier work at the University of Leeds and the Wool Industries Research Association in the UK had shown that wool and other fibers were made up of an outer cuticle and a central cortex. Building on this information, scientists at CSIRO conducted many of the most fundamental studies on the structure and composition of wool. Using X-ray diffraction and electron microscopy, combined with oxidative and reductive chemical methods, CSIRO produced the first complete diagram of a hair fiber [26]. CSIRO scientists also conducted extensive studies on the wool proteins themselves. Many methods for the extraction, separation, and identification of these keratins were developed. Other fundamental studies included wool surface chemistry, processing of products, fellmongering (harvesting of wool from sheep), felting, carbonising, surface treatments, flammability, denaturation, chemical modification, dyeing, photochemical degradation, and application of polymers to wool. This monumental effort was conducted over a period of more than 30 years and resulted in over 660 publications, 20 patents, and three books. In the meantime, the use of oxidative and reductive chemistry to extract keratins from hair fibers was being applied by other scientists across the world. In The Netherlands, researchers patented a method for making films and textile fibers from reductively extracted keratins from ground up hooves [27].
Probably nowhere in the world was keratin research more active than in Japan. Between the years of 1940 and 1970, applications for keratin-based inventions submitted to the Japanese patent office numbered more than 700. This was a renaissance in keratin research that was trending toward the fundamentals of materials science and biomaterials. Driven by the development of reliable methods to solubilize keratins, researchers were beginning to understand the many sub-classes of keratins, and their different properties [28−32]. In 1965, CSIRO scientist W. Gordon Crewther and his colleagues published the definitive text on the chemistry of keratins [7]. This chapter in Advances in Protein Chemistry contained references to more than 640 published studies on keratins.
Keratin Research from 1970-Present
Advances in the extraction, purification and characterization of keratins, led to the exponential growth of keratin materials and their derivatives. In the 1970s, methods to form extracted keratins into powders, films, gels, coatings, fibers, and foams were developed and published by several research groups [33−35]. All of these methods made use of the oxidative and reductive chemistries developed decades earlier, or variations thereof.
The prospect of using keratin as a biomaterial in medical applications was obvious. During the 1980s, collagen became a commonly used biomolecule in many medical applications. Other naturally derived molecules soon followed such as alginates from seaweed, chitosan from shrimp shells, and hyaluronic acid from animal tissues. The potential uses of keratins in similar applications began to be explored by a number of scientists. In 1982, Japanese scientist published the first study describing the use of a keratin coating on vascular grafts as a way to eliminate blood clotting [36], as well as experiments on the biocompatibility of keratins [37]. Soon thereafter in 1985, two researchers from the UK published a review article speculating on the prospect of using keratin as the building block for new biomaterials development [38]. In 1993, a Japanese scientists published a commentary on the prominent position keratins could take at the forefront of biomaterials development [39].
Keratin Biomaterials
The solid foundation for keratin research led to the development of many keratin-based biomaterials for use in biomedical applications. This foundation is based on several key properties of keratins that contribute to the overall physical, chemical and biological behavior of these biomaterials. First, extracted keratin proteins have an intrinsic ability to self-assemble and polymerize into porous, fibrous scaffolds. The spontaneous self-assembly of keratin solutions has been studied extensively at both the microscale [40−42] and macroscale levels [43]. This phenomenon of self-assembly is evident in the highly conserved superstructure of the hair fiber and, when processed correctly, is responsible for the reproducible architecture, dimensionality and porosity of keratin-based materials. In addition, keratin biomaterials derived from wool and human hair have been shown to possess cell binding motifs, such as leucine-aspartic acid-valine (LDV) and glutamic acid-aspartic acid-serine (EDS) binding residues, which are capable of supporting cellular attachment [44,45]. Together, these properties create a favorable three dimensional matrix that allows for cellular infiltration, attachment and proliferation. Like other intermediate filaments, keratins are also believed to participate in some regulatory functions that mediate cellular behavior [46,47]. Thus, the conservation of biological activity within regenerated keratin biomaterials could prove advantageous for the control of specific biological functions in a variety of tissue engineering applications.
The enhanced physical, chemical and biological properties of keratins as well as the desire to exploit wool and human hair fibers as a renewable natural resource have fueled keratin biomaterials research over the past three decades. Much has been done to both fabricate and characterize new keratin-based products such as films, sponges, scaffolds and fibers. In many cases, these novel keratin materials have been shown to possess excellent biocompatibility. In addition, many researchers have discovered methods for modulating the physical and mechanical properties of keratins in order to create biomaterials that have appropriate characteristics for their application of interest.
Keratin Films
The preparation of protein films from keratin extracted from wool and human hair has been used for a number of years to explore the structural and biological properties of self-assembled keratins. Yamauchi et al. [48] were among the first to begin to investigate the properties of products made from extracted wool keratins and in doing so described the physiochemical and biodegradational properties of solvent-cast keratin films. Although pure keratin films were too fragile for practical use, the addition of glycerol resulted in a transparent, relatively strong, flexible, and biodegradable film [48]. In an additional publication, Yamauchi et al. [49] described the cell compatibility of this film by cultivation of mouse fibroblasts on the surface. When compared to the growth of cells on collagen and glass, the keratin substrate proved to be more adhesive to the cells and more supportive of cellular proliferation [49]. Fujii et al. [50] also demonstrated that hair keratins were useful for preparing protein films and described a rapid casting method. This research also revealed the feasibility of incorporating such bioactive molecules as alkaline phosphatase into the keratin films for controlled-release applications. The films, however, had poor strength and flexibility [50]. Together these early studies demonstrated the feasibility of preparing keratin films and demonstrated their potential for use as biomaterials in medical applications.
Like many natural-derived biomaterials, however, the practical use of keratin-based products was ultimately limited by their poor mechanical characteristics. Thus, keratin film research shifted to focus on the optimization of the physical strength and flexibility of films while maintaining their excellent biological activity. Several approaches for controlling the physical and biological properties have been considered, including the addition of natural [51−56] and synthetic [57,58] polymers to keratinblended systems and new preparation techniques for pure keratin films [59,60].
In 2002, Yamauchi's group enhanced the mechanical properties of their glycerol containing keratin films by the addition of chitosan. Chitosan is a well investigated biomolecule for biomaterial applications, and is known to possess high biocompatibility and biological function for wound healing and antibacterial activity. Addition of chitosan into the keratin films resulted in improved mechanical strength. Furthermore, the chitosan-keratin films also demonstrated antibacterial properties and were shown to be good substrates for cell culture [51]. The biological activity of keratin films was also increased by incorporating a cell adhesion peptide, Arg-Gly-Asp-Ser (RGDS), at the free cysteine residues of reduced keratin extracts. RGDS-carrying keratin films proved to be excellent substrates for mammalian cell growth, and this work again demonstrated the potential and versatility of keratin biomaterials [52].
Silk fibroin (SF) is another natural polymer that has received much attention as a biomaterial due to its intrinsic biocompatibility and biodegradability. Keratin-SF films have been studied extensively in order to understand the interactions that occur between the two biomolecules and how they relate to the overall mechanical and biological characteristics of the biomaterial. Lee et al. [53] studied the secondary structure of keratin-SF films and observed a transition from random coil to β-sheet structure for fibroin due to the presence of the polar amino acids present in keratin. These blended films were shown to have enhanced antithrombogenicity properties and increased biocompatibility in comparison to SF or keratin only films [54], most likely due to the enhanced surface polarity of the blends generated by the conformational transformation of the proteins [55]. Vasconcelos et al. [56] further explored the mechanical and degradation properties of keratin-SF blended films and concluded that SF and keratin interactions are not simply additive but rather the two proteins are capable of unique intermolecular interactions that directly affect the bulk properties of the films. Ultimately, the nature and strength of these interactions and knowledge of the degradation rates will allow for the design of matrices for release of active compounds that are suitable for future biomedical applications [56].
In addition to natural biopolymers, the interaction between keratin and synthetic polymers has also been studied [57,58]. Tonin et al. [57] explored the relationship between poly(ethylene oxide) (PEO) and keratin blended films in order to develop a keratin-based material with improved structural properties. Morphological, structural and thermal analyses of the keratin/PEO films revealed that at appropriate levels, keratin inhibits PEO crystallization and PEO interferes with the keratin selfassembly, inducing a more thermally-stable, β-sheet secondary protein structure. The improved structural properties of keratin/PEO blends enables the development of keratin materials for use as scaffolds for cell growth, wound dressings and drug delivery membranes [57]. The intermolecular interactions between keratin and polyamide 6 (PA6) have also been studied with the goal of creating keratin-based materials that have practical use for a wide variety of applications ranging from biomedical devices to active water filtration and textile fibers [58].
In addition to creating blended keratin systems with natural or synthetic polymers, researchers have also investigated alternative fabrication techniques for creating keratin films with more suitable mechanical properties. Katoh et al. [60] reported an alternative method for processing keratin films to overcome the limited versatility associated with solution-cast methods. Compression molding of Ssulfo keratin powder proved to be an effective technique for producing pure keratin films of distinct shape. Control of the mechanical properties of the films was obtained by controlling the molding temperature and water content of the film, and the biocompatibility of the S-sulfo films was also demonstrated by fibroblast attachment and proliferation on the keratin substrates [60]. In a separate study, an improved procedure for preparing pure keratin films with translucent and flexible properties was reported, and the practical application and compatibility of the films were demonstrated by testing their compatibility with human skin [59].
Recently, Reichl et al. [61] characterized two different approaches for substrate coatings and demonstrated the growth behavior of twelve different cell lines cultured on the keratin films. Results showed that growth substrates formed by casting of a keratin nanosuspension supported cell adherence and improved cell growth as compared to uncoated polystyrene or keratin coatings formed by trichloroacetic acid precipitation. The new approach is believed to be a low cost, standardized alternative to commonly used coatings such as collagen and fibronectin [61].
Keratin Sponges and Scaffolds
The ability of extracted keratin proteins to self-assemble and polymerize into complex three dimensional structures has led to their development as scaffolds for tissue engineering. Fabrication of wool keratin scaffolds for long term cell cultivation was first reported by Tachibana et al. [44] in 2001. The matrices were created by lyophilization of aqueous wool keratin solutions after controlled freezing, which resulted in a rigid and heat-stable structure with a homogenously porous microarchitecture. The keratins, which were shown to contain RGD and LDV cell adhesion sequences, exhibited good cell compatibility and supported the attachment and proliferation of fibroblasts over a long-term cultivation period of 23−43 days. In addition, the free cysteine residues present within the scaffold were shown to be potential modification sites for the immobilization of bioactive substances [44]. In later work, lysozyme was used as a model compound and linked to the keratin sponge via disulfide and thioether bonds. Disulfide-linked lysozyme was gradually released over a 21-day period whereas lysozyme linked via thioether bonds was stably maintained for up to two months. This work demonstrated that the selection of a chemical crosslinker can uniquely determine the stability of an immobilized bioactive substance on keratin sponges [62].
Functionalization of active free thiol in the keratin sponges using various chemical treatments has also been demonstrated using iodoacetic acid, 2-bromoethylamine, and iodoacetamide to produce carboxyl-, amino-, and amido-sponges, respectively. These chemically-modified keratin sponges have been shown to mimic extracellular matrix proteins, and the large presence of active groups within the sponges has allowed for further hybridization with bioactive molecules. This technique was demonstrated by Tachibana et al. [63] in 2005 with the hybridization of keratin sponges with calcium phosphate. Two types of calcium phosphate composite sponges were fabricated by either chemically binding calcium and phosphate ions or trapping hydroxyapatite particles within the keratin carboxysponges. Both hybridized materials supported osteoblast cultivation and altered their differentiation pattern based on the expression pattern of alkaline phosphatase [63]. Keratin carboxy-sponges have also been functionalized with bone morphogenetic protein-2 (BMP-2), which was shown to associate tightly within the keratin sponge and to localize the differentiation of preosteoblasts grown with the construct. Cells outside of the BMP-2-loaded construct did not differentiate, suggesting that no significant amount of BMP-2 leaked out and that the effects were confined inside the modified keratin sponge. These findings are significant for in vivo applications because it is expected that the use of these scaffolds will promote internal osteogenesis while avoiding external heterotopic ossification [64].
Regulation of pore size and porosity of keratin scaffolds was achieved by Katoh et al. [65] using a compression molding/particulate leaching (CM/PL) technique. The ability to regulate the pore diameter and interconnectivity of scaffolds for tissue engineering applications is desired for allowing adequate cellular infiltration and nutrient delivery. In addition to having regulated pore size, scaffolds created using the CM/PL method were water tolerable, which presents significant superiority over collagen materials that are soluble in water without the use of UV irradiation or cytotoxic chemical crosslinkers [65].
The in vivo biodegradation of keratin bars was explored by Peplow et al. [66] in order to establish a relationship between mass and physical strength. Rectangular bars of reconstituted keratins were subcutaneously implanted into adult rats, and dry weight and elastic modulus of the explanted bars were monitored over an 18-week time period. The dry weight of the bars decreased gradually with a maximum weight degradation of 22% at 18 weeks. The elastic modulus of the keratin bars decreased abruptly between 3 and 6 weeks accompanied by an increase in the number of fissures and cavitations at the surface of the bars. This gradual degradation and quick loss of mechanical integrity are indications that this form of keratin is more suited as a resorbable implant material to provide scaffolding for non-load bearing applications [66].
The construction, characterization and cytocompatibility of human hair protein scaffolds for in vitro tissue engineering applications has recently been reported by Verma et al. [45]. Keratin proteins extracted from hair were fabricated into porous sponges via lyophilization of frozen protein suspensions. Characterization of the sponges was performed using swelling experiments and morphological assessments made by scanning electron microscopy (SEM), which showed that the sponges were capable of swelling 48% within a period of 60 minutes and that the sponge surface had an average pore diameter of 150 µm. The interconnectivity and pore diameters supported cell attachment and survival. The authors suggest that these scaffolds are prospective materials for tissue engineering applications due to their human origin, biodegradability and cytocompatibility [45].
Keratin Fibers
In recent years, research on the electrospinning of biocompatible polymeric materials has greatly increased due to the abundance of potential biomedical applications for nanofibrous materials. Electrospinning is a technique that utilizes a high voltage to create an electrically charged jet of polymer that is drawn toward a grounded collection plate or mandrel. The resulting fibers have diameters in the nano-to micro-scale range and are randomly arranged to form a non-woven fibrous mat. The enhanced physical configuration (i.e., small pore size, high porosity, three-dimensional features, and high surface area-to-volume ratio) of nanostructured nonwovens promotes cell adhesion and growth, which has led to the development of electrospun membranes for such uses as bandages for wound healing and scaffolds for tissue engineering. Recently, the electrospinning process has also been extended to include regenerated keratin extracted from hair and wool fibers. Due to the intrinsically poor mechanical characteristics of pure keratin, however, many researchers have resorted to the addition of synthetic or natural polymers in order to increase the processability of keratin for fiber formation. Much work has been done to characterize the intermolecular interactions between the keratin and "additive" macromolecule in order to correlate the properties of the blend solution to the properties of the electrospun fibers.
Aluigi et al. [67,68] created keratin/PEO materials by combining aqueous keratin solutions and PEO powder. In the first of two studies, the investigators identified the electrospinning parameters to create defect-free fibrous materials. Blended solutions with a keratin/PEO weight ratio of 50:50 and 7% and 10% total polymer concentrations were shown to have sufficient viscosities to electrospin with few defects. Spectroscopic and thermal analyses indicated that the electrospinning process destabilized the natural self-assembly of keratin and promoted a less complex protein conformation [67]. In further work, keratin and PEO were combined in different proportions in order to correlate the chemical, physical, and rheological properties of the blend solutions with the morphological, structural, thermal and mechanical properties of the electrospun mats. The keratin/PEO solutions were shown to have increased viscosities in comparison to both pure PEO and keratin, and the blends exhibited a non-Newtonian flow behavior with strong shear-thinning properties that were dependent on PEO concentration. The low viscosity of blends with higher keratin content greatly hindered their ability to form fibers; however, solutions with a lower composition of keratin were successfully electrospun without defects. Comparisons between actual and theoretical rheological properties using Graessley's theory showed that the broadening of molecular weight distribution and possible bonding between PEO and keratin macromolecules at certain keratin/PEO ratios are responsible for the shear viscosity behavior of the blends, which ultimately correlate with the morphology of the electrospun fibers [69]. The practical uses of the keratin/PEO nanofibrous mats, however, were ultimately limited by their water instability and poor mechanical properties [68].
Fibroin regenerated from silk has also been used to improve the processability of keratin for electrospinning applications [70]. Characterizations of the rheological behavior of keratin/fibroin solutions revealed macromolecule interactions that promoted the formation of network structures with maximum synergy at a 50/50 (w/w) blend ratio. At this ratio, the synergistic effects on the protein interactions resulted in the formation of smaller-diameter, finer nanofibers as compared to fibers formed using solutions of unequal ratios of keratin/fibroin. Conformational analyses confirmed the prevalence of β-sheet secondary structure in keratin/fibroin films except at the 50/50 blend in which the proteins showed a propensity to assemble in the α-helix-coiled structure. On the contrary, the electrospinning process was shown to induce changes in secondary structure at all blend ratios by preventing β-sheet formation and promoting a random coil or α-form structure. In addition, the αcrystallites formed by electrospinning were shown to be less thermally stable, most likely due to the high rate of fiber formation that limits the molecular rearrangement and crystallization of the keratin chains [70].
Wet-spinning is another fiber-forming technique that has traditionally been used for manufacturing synthetic fibers for the textile industry, but has recently been employed to create single fiber biomaterials. This method involves extrusion of a dope solution through a spinneret into a coagulation bath and subsequent drawing/stretching to promote polymer chain alignment and fiber formation. The physical limitations of keratin materials have hindered the production of pure keratin fibers, yet researchers have overcome these challenges using blends of synthetic and natural polymers with improved material properties.
Katoh et al. [71] improved upon the fiber-forming capabilities of aqueous keratin solutions using poly-(vinyl alcohol) (PVA). PVA acted to increase the viscosity of the spinning dope, which allowed fibers with a keratin content ranging from 13−46% to be spun. Due to the fragility of fibers with high amounts of keratin, the maximum keratin content for sufficient fiber formation was determined to be 30%. This combination of keratin and PVA proved to be advantageous in terms of mechanical strength, waterproof characteristics, and the adsorption of toxic substances. According to the authors, keratin-PVA fibers are expected to have wide-spread industrial applications as absorbents for toxic substances such as heavy metals ions and formaldehyde gas [71].
Wrzesniewska-Toski et al. [72] also employed wet-spinning techniques to create novel fibrous keratin-based materials that have potential application as hygienic fabrics. Keratin extracted from chicken feathers and bio-modified cellulose were combined and used to create fibers that were characterized as having better sorption properties, higher hygroscopicity, and a smaller wetting angle than cellulose-only fibers. Although introduction of keratin into cellulose fibers decreased the mechanical properties, a level was achieved that still enabled their application for manufacturing composite fibrous materials. In addition, the cellulose-keratin fibers had better biodegradation than cellulose fibers.
Keratin Biomaterials in Tissue Engineering and Regenerative Medicine
Much work has been done to fabricate and characterize keratin-based materials and to demonstrate their cytocompatibility and biodegradation. Until recently, however, few of these biomaterial developments had been applied in models of tissue regeneration.
Sierpinski et al. [73] and Apel et al. [74] demonstrated that keratin-based hydrogels were neuroinductive and capable of facilitating regeneration in a peripheral nerve injury model in mice. Human hair keratins enhanced the in vitro activity of Schwann cells by inducing cellular proliferation and migration, and by upregulating expression of specific genes required for important neuronal functions. When translated into a mouse tibial nerve injury model, keratin gel-filled conduits served as a neuroinductive provisional matrix that mediated axon regeneration and improved functional recovery compared to sensory nerve autografts [73]. In another study, the time course of peripheral nerve regeneration was evaluated with respect to neuromuscular recovery and nerve histomorphometry. Keratin-filled hydrogels were shown to accelerate nerve regeneration as evidenced by improved electrophysiological recovery and increased axon density at early time points. This early development of neuromuscular contacts resulted in more functional connections with the target muscle that in turn promoted increased axon myelination at six months. The authors concluded that these results showed that keratin-based scaffolds made from human hair can facilitate peripheral nerve regeneration and promote neuromuscular recovery that is equivalent to the gold standard, sensory nerve autografts [74].
Keratin hydrogels derived from human hair have also been shown to act effectively as a hemostatic agent in a rabbit model of lethal liver injury. In comparison to other commonly used hemostats (QuickClot ® and HemCon ® bandage), the keratin hemostatic gel improved 24 hr survival and performed consistently as well, if not better than, conventional hemostats in terms of total blood loss and shock index. The keratin gel used in these experiments acted on the injury site by instigating thrombus formation and by forming a physical seal of the wound site that acted as a porous scaffold to allow for cellular infiltration and granulose tissue formation [75]. The ability for keratin-based biomaterials to be translated into the human clinical setting is dependent on further research to elucidate the mechanisms by which these materials regulate hemostasis and nerve regeneration.
Conclusions
It would appear that keratin biomaterials have been in the collective conscience of materials researchers for many decades, yet there are no keratin biomaterials currently in clinical use. This comprehensive review has shown an impressive level of activity, diversity, and ingenuity, albeit at a relatively low level compared to other mainstream biomaterials. Keratin biomaterials possess many distinct advantages over conventional biomolecules, including a unique chemistry afforded by their high sulfur content, remarkable biocompatibility, propensity for self-assembly, and intrinsic cellular recognition. As these properties become better understood, controlled and exploited, many biomedical applications of keratin biomaterials will make their way into clinical trials. | 8,005.2 | 2010-02-01T00:00:00.000 | [
"Biology",
"Materials Science",
"Engineering"
] |
Hasok Chang on the nature of acids
For a period of several years the philosopher of science Hasok Chang has promoted various inter-related views including pluralism, pragmatism, and an associated view of natural kinds. He has also argued for what he calls the persistence of everyday terms in the scientific view. Chang claims that terms like phlogiston were never truly abandoned but became transformed into different concepts that remain useful. On the other hand, Chang argues that some scientific terms such as acidity have suffered a form of “rupture”, especially in the case of the modern Lewis definition of acids. Chang also complains that the degree of acidity of a Lewis acid cannot be measured using a pH meter and seems to regard this as a serious problem. The present paper examines some of these views, especially what Chang claims to be a rupture in the definition of acidity. It is suggested that there has been no such rupture but a genuine generalization, on moving from the Brønsted-Lowry theory to the Lewis theory of acidity. It will be shown how the quantification and measurement of Lewis acidity can easily be realized through the use of equilibrium theory and the use of stability constants.
Introduction
Hasok Chang is without a doubt one of the finest historians and philosophers of science working today. He generally focuses on the scientific details rather than retreating into abstract metaphysics or analytical philosophy of science. He is also the author of several books including Inventing Temperature, for which he was awarded the prestigious Lakatos award as well as Is Water H 2 O? among others (Chang 2004. Chang is also the initiator of a brand of the history and philosophy of science that seeks to expand scientific knowledge itself, which he calls ''complementary science''. This project aims to give a novel function to history and philosophy of science, without denying its traditional role. Although a philosopher of physics by training, Chang has made a number 1 3 of 'excursions' into chemistry (Chang 2016). The present article will focus on one of these chemical studies, namely his writings on acidity.
Chang generally shows a remarkable attention to the scientific details of fields such as thermometry, the scientific revolution and electrochemistry. But I believe that he may be imposing his philosophical view onto the science in some cases, and that he is perhaps being selective of the parts of science that support his philosophical and historiographical approach.
Chang on acids
Chang believes that there is what he calls a "rupture" between the way in which acids were conceived according to the Arrhenius and the Brønsted-Lowry theories, on one hand, and the Lewis theory of acidity on the other hand. Chang also rather strenuously rejects the notion that Lewis' theory of acidity represents a generalization of the Brønsted-Lowry definition.
Let me begin by reviewing the three elementary definitions of acidity. According to Arrhenius' theory, an acid is any substance that forms H + ions in aqueous solution. HCl for example is regarded as an acid, because on reacting with water it forms H + ions, Brønsted and Lowry, both of whom worked on the physical chemistry of solutions, generalized this definition so that an acid is a substance that donates H + ions to any polar solvent and even in the absence of a solvent. The following are examples of each of these types of reactions.
In the first of these two reactions, HCl acts as an acid while CH 3 COOH, whose common name is acetic acid, is actually acting as a base. The notions of acidity and basicity are therefore seen to be relational, in that no single substance may be said to be an acid or a base in all circumstances.
The second example contains no solvent whatsoever. According to the Arrhenius definition, this example would not therefore be classified as an acid-base reaction, whereas according to the Brønsted-Lowry definition HCl is acting as an acid since it can donate protons, or H + ions, to ammonia.
Thirdly, there is a definition by G.N. Lewis, who had a broader vision of chemical phenomena than Brønsted and Lowry, and whose initial concern was the application of Gibbs' thermodynamics to non-ideal solutions. According to the Lewis' theory, an acid is an electron pair acceptor whereas a base is an electron pair donor as shown in the following example, This reaction clearly does not fall within the earlier definitions of acidity since no transfer of H + ions takes place. Another example of a reaction that falls under the Lewis definition of an acid-base reaction, but not the two earlier definitions is, Neither the BF 3 molecule in the first example, or the Cr 3+ ion in the second, are donating protons and yet are considered as acids in Lewis' definition since they each accept a pair of electrons.
But Chang strongly disagrees with the generally held view that the Lewis definition subsumes the earlier ones, or that it represents a true generalization of them. In addition, Chang sees a rupture between the Brønsted-Lowry and Lewis definitions. Here is what he has written, I am almost inclined to say that the two concepts are incommensurable. It might be sufficient, for present purposes, to say that the Lewis and the Brønsted-Lowry definitions refer to two different sets of chemical substances; there is an overlap between the two sets, but one is not a subset of the other (Chang 2012, 694). Perhaps the most popular story told by good chemists is that the Lewis definition encompasses the Brønsted-Lowry definition, that it is a generalization of the latter, because a proton donor is also capable of accepting an electron pair. But I have my doubts about this. Consider the reaction of hydrochloric acid and sodium hydrox-ide… But how would the same reaction be understood from the Lewis point of view? Does HCl accept a pair of electrons from NaOH? That is not obvious since the HCl molecule does not have an empty orbital into which to accept an electron pair (Chang 2012, 693-694) I believe there are two errors in the second quotation. First of all, HCl accepts a pair of electrons from the OH − ion, not simply from NaOH as Chang writes. Secondly, the author fails to mention that the H + ion contains an empty 1 s orbital which does allow it to readily accommodate a pair of electrons.
In the following passage Chang seems to partly acknowledge his earlier oversight when he says, At any rate, nearly all of the HCl in an aqueous solution will be dissociated into H + and Clions, so what must happen is that the H + ion accepts the electron pair from But then what is acidic is the H + ion, not HCl as a substance or a molecule, which is contrary to the Brønsted-Lowry concept (and to common parlance) (Chang 2012, 694).
As a matter of fact, according to Brønsted-Lowry, HCl (g) or "HCl as a molecule", is not acidic. It is only acidic when it reacts with water. Matters would have been clearer if Chang had perhaps written, whereas the over-simplified equation that appears in the second of the quotations above may give the impression that HCl is incapable of acting as an acid, the more correct version, that includes the aqueous solvent, emphasizes that it is only HCl in water that is acidic. Chang may perhaps be unaware of the fact that HCl in gas form is not acidic. As every high school student learns, HCl the gas is neutral whereas aqueous HCl is acidic, as can be demonstrated in the classic fountain experiment (Fig. 1).
One cannot help wondering why Chang appears to show such nostalgia for acids that only form H + ? Chang also appears to hold a parallel nostalgic view regarding such entities as phlogiston in his book on water . If the motivation is an urge to highlight the continuity in scientific development, then I am in full agreement and in fact would wish to go a good deal further in emphasizing continuity and incremental steps in scientific development rather than any forms of Kuhnian revolutions or ruptures (Scerri 2016).
However, it would appear that Chang may also be siding with Kuhn in the case of acidity, in choosing to focus on rupture, whereas he has also frequently written about the virtue of retaining scientific terms (Chang 2011). Fig. 1 The fountain experiment in which water containing litmus indicator is made to enter the glass bulb filled with HCl gas
The pH meter
One of the main reasons that Chang cites for his reluctance to accept that Lewis achieved a true generalization would appear to be that Lewis acidity cannot be quantified by means of a pH meter. In the same article he writes, At this point there may be a strong temptation to get back to something more certain and sensible like measurement to anchor the meaning of acidity, rather than seeking security in ever-changing theories… We do have a widely used measure of acidity in the form of pH, but I will argue that it is not a measure entirely fit for grounding the concept of acidity in its theoretical or empirical aspect. (Chang 2012, 695). I believe this may be an example of putting one's philosophical views ahead of the science. Chang has a long-standing and well-known penchant for pragmatism, operationalism and experiments in general, and what would appear to be a certain disdain for theories in science.
Returning to his views of acids we also read, …pH only measures Brønsted-Lowry acidity and has no clear connection to Lewis acidity. This is of course understandable, given that the definition and measurement of pH by Sørensen … dates back to 1909, more than a decade before Lewis articulated his theory of acids." (Chang 2012, 696).
However, this is not why pH does not apply to Lewis acids. The measure of pH obviously measures H + concentration and is consistent with how Brønsted and Lowry define acids in terms of the formation of H + ions. Since Lewis' definition does not involve H + ions, one would not expect his concept of acidity to be quantifiable through pH measurement.
Chang, presses on by saying, History aside, this situation raises a scientific and philosophical difficulty: even if we assume that all Brønsted-Lowry acids are Lewis acids, it is certainly not the case that all Lewis acids are Brønsted-Lowry acids; therefore, there are Lewis acids that lack any precise quantitative measure empirically (Chang 2012, 696).
I believe this to be a non-sequitur. The fact that "not all Lewis acids are Brønsted-Lowry acids" does not immediately imply that Lewis acids "lack any precise quantitative measure". The fact that the acidity in the Cr 3+ reaction, mentioned earlier, cannot be quantified by means of the pH meter does not imply that it's acidity cannot be quantified tout court. This feature does not refute or threaten the Lewis definition of acidity in any way.
Equilibrium theory
There is a perfectly good approach to the quantification of Lewis acidity which Chang appears not to be aware of. Lewis acidity can be quantified through the well-known use of a stability constant for any reaction, which is given by the expression below, in which square brackets denote the concentration of each chemical species.
The greater the value of the equilibrium constant K, the greater the acidity of the Cr 3+ ion in this case. Said otherwise, the greater the magnitude of the equilibrium constant, the more the position of equilibrium is said to lie towards the right side. The individual values that contribute to this constant can very readily be measured, but as far as I am aware, Chang has never so much as ever mentioned stability constants in all of his writings on Lewis acids. Moreover, the same general approach, of chemical equilibrium theory, can be applied to the earlier definitions of Arrhenius and Brønsted-Lowry. There are no Kuhn losses therefore on moving to quantifying acidity by appeal to the concept of equilibrium. 1 In the case of a strong acid such as aqueous HCl for example, one could quantify the degree of acidity or ionization in an analogous fashion to the way that the Cr 3+ case was handled above. Consider the reaction below, The equilibrium constant is given by the expression, However, reactions of this kind involving strong acids proceed towards the right to such as extent that the equilibrium constant can be said to be effectively infinite. Equilibrium constants for strong acids are not therefore cited in the literature. Strong acids are simply fully ionized such that the denominator approaches zero and consequently the right-hand side of the above expression, and hence the equilibrium constant, approach infinity.
Matters are different for weak acids, such as Chang's favorite example of acetic acid, for which the reaction with water and the equilibrium constant are given by the following expressions, Unlike aqueous HCl in the previous example, acetic acid is only weakly ionized and has a Ka value of approximately 10 -5 at room temperature and pressure. In order to calculate the pH of a molar or 1.00 M solution of acetic acid one needs to perform a simple equilibrium calculation as shown below, One can obtain the value of x by solving a quadratic equation. The pH can then be obtained by taking the negative logarithm to base 10 of the value of x. There is no need to use a pH meter in cases such as these, provided that the concentrations of the relative chemical species can be measured. 2 Sometimes a little theory can go a long way. To summarize, the use of equilibrium theory allows one to quantify acidity in Arrhenius, Brønsted-Lowry and even Lewis acids, whereas the use of pH meters only applies to Arrhenius and Brønsted-Lowry acids. This feature supports the notion that Lewis acidity is indeed a generalization of the earlier definitions and certainly not a case of "rupture".
In textbook presentations of this topic a Venn diagram is often produced to make precisely this point, and to illustrate the gain in generality on moving from the Arrhenius, to the Brønsted-Lowry theory and on to the Lewis theory. In fact, the increasing generality of definitions of acidity has now extended even further than Lewis' definition such as is shown in the Venn diagram in Fig. 2.
For example, Hall, clearly asserts clearly asserts that Lewis' theory is a genuine generalization of that of Brønsted, Ed. 1940, 124-128 Readers will observe that this system [Lewis] includes all the acids and bases of the Brønsted system and no other bases, while it points out a host of new acids (including most cations) which the Brønsted system does not recognize as such (Hall 1940, 127).
There is a striking analogy here with the changing definitions of oxidation and reduction that have arisen through the history of chemistry, a fact that was even recognized by Brønsted, about 100 years ago (Brønsted 1923).
Initially oxidation meant the combination of oxygen with any particular element. The same term was later applied to mean the reaction of any given element with any highly electro-negative element, of which oxygen is just one particular example. Even later the modern definition of oxidation became expressed in terms of electrons, or the lingua franca of chemistry. Whereas oxidation is any process which results in a loss of electrons, reduction represents the opposite trend, namely the gain in electrons. As in the case of Lewis acidity, nobody doubts that this development represents a gain in generality, rather than any form of rupture. In the words of Lewis, To restrict the group of acids to those substances which contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen (Lewis 1938).
Non-ideality in solutions
Philosophers of science may well be familiar with the concept of an ideal gas and the accompanying ideal gas equation whereby PV = nRT. Such ideal gases have often featured in discussions of scientific models as have discussions of non-ideal gases and the use of the Van der Waals equation (Mizrahi 2012;Woody 2013).
However, philosophers of science are generally not aware of an equally well-developed subject of non-ideal solutions. Consider for example a solution of an ionic substance such as sodium chloride in water. Let us also assume that this solution is of concentration 0.10 molar. Solutions of ionic substances invariably behave non-ideally because their ions are not hard spheres that merely collide with each other. The fact that they have electrical charges immediately introduces forces of attraction, which are ignored in the case of ideal solutions just as they are for ideal gases. In the case of non-ideal solutions, the departure from ideality is actually more complicated. In addition to the forces of attraction between oppositely charged ions, some ions are also surrounded by those of the opposite charge, which tends to reduce the attraction between the original positive and negative ions. Furthermore, not all ions behave similarly since the attraction depends on the charges on the ions which typically have values of ± 1, ± 2 or ± 3.
Consider for example a situation in which 100 ions of Na + are present, of which 25 are hydrated, meaning they are surrounded or shielded by water molecules. As a result, not all Na + ions are said to be "active" and the nominal concentration of Na + ions becomes somewhat irrelevant. One must appeal to the effective concentration or, to use the technical term, to the activity of the solution instead of its concentration. The definitive treatment of this subject was published as long ago as 1907 by none other than G.N. Lewis. The topic is invariably treated in advanced courses in thermodynamics, physical chemistry and analytical chemistry (Atkins et al. 2018;Harris 2020). Applications of the concept of activity range from electrochemistry to the behavior of biological cells in biophysics.
Returning to the subject of pH, this is of course very much an ionic process since it concerns H + ions that are produced by acids according to both the Arrhenius and the Brønsted-Lowry definitions. The more general expression for pH which is expressed in terms of activities is, rather than the more elementary expression of, In the case of very dilute solutions the two expressions lead to approximately the same numerical value for pH, but this is not so in cases of more concentrated solutions. Table 1 shows a comparison between experimentally measured pH values, which depend on activity values, as opposed to values calculated on the basis of the concentration of various solutions of the typical strong acid HCl.
The Similarly, a recent website on the analytical chemistry of water states that, The definition of pH first introduced by Sørensen (the concept that pH is determined by hydrogen concentration) was therefore partly amended as science advanced. However, his definition confers advantages in terms of practical usage, and the corresponding amendment does not downgrade its biological and chemical significance. Advances in thermodynamics and practical methods of pH measurement have played an important role in the process of this redefinition. For this reason, from the point of view of the engineers who use pH, it can still be said that "the Father of pH" is a title that Sørensen deserves. Sørensen's first definition is still used in basic general chemistry courses, in order to make the concept easier to understand. Note that the theoretical definition of pH uses the extremely difficult concept of activity, as shown here (Horiba website).
The authors then proceed to give the rigorous version of what pH actually measures, in terms of activity rather than concentration.
Thermodynamic activity
Chang quotes Bates, an expert of pH measurement as saying, With the perfection of chemical thermodynamics, it became evident that Sørensen's experimental method did not, in fact, yield hydrogen ion concentration....[The numbers obtained] were not an exact measure of the hydrogen ion activity …" (Bates 1930 to which he adds the further remark, All in all, the correspondence between the theoretical notions of acidity and the methods of its measurement has been, and continue to be, less than tight (Chang 2012, 697).
Thermodynamic activity, as mentioned in the previous section, is a technical term that Chang does not seem to be familiar with. This quantity is expressed in the following formula and is expressed in the following formula in which the term in square brackets is the concentration of any substance C and γ is its associated activity coefficient.
Thermodynamic activity does not just mean 'acting as an acid' as Chang seems to believe. It is a term that was introduced, by Lewis in 1907, in order to account for the rather specific phenomenon of the hydration of ions and their resulting ionic strengths.
In elementary treatments of chemical equilibrium such as in a typical reaction, The equilibrium constant K is expressed by the formula, A more general approach consists of replacing each of these concentrations by their thermodynamic activities to give, The various activity coefficients, or γ's are calculated according to the Debye-Hückel equation of, where α is the size of the hydrated ion in picometers, μ is the ionic strength, z is the charge of the ion. Meanwhile, when Chang cites the term "activity" as used by Bates, he seems to assume its everyday meaning, in the sense of the way that acids act, rather than the technical thermodynamic sense that Bates is referring to. What Bates is discussing is that the more accurate definition of pH, that takes account of activities rather than concentrations, assumes the form of, rather than the more familiar version of as mentioned in the previous section of the present article.
The more accurate treatment recognizes the difference between concentration of chemical species as compared with their activities. In the case of dilute solutions the difference is of little significance. For example, a 0.025 molar solution of HCl has a pH of 1.60 without any correction for activity coefficients, whereas it has a value of 1.66 if activity it taken into account. As mentioned in the previous section, this is not the case for concentrated solutions for which it is essential to utilize activities instead of values of concentration.
Chemical bonding and electron pairs
Let me return to Lewis' definition in order to explain why this is not only a genuine generalization of the previous definitions of acidity, but also part of a much greater development in the history of chemistry and a major unification that was initiated by Lewis. This too is an area that Chang does not broach, while taking Lewis acidity out of context, as I see it.
Lewis is responsible for introducing the view that chemical bonding, and chemical reactivity in general, are primarily concerned with pairs of electrons. His definition of acidity, which Chang objects to, should be seen in this wider context of the development of the central concepts of structure and bonding and not in isolation. (Lewis 1923).
Lewis is famously remembered for having proposed the idea that a covalent bond consists of a shared pair of electrons. This has little to do with the very limited insight that the Arrhenius and Brønsted-Lowry theories of acidity have to offer and everything to do with Lewis' theory of acidity. Moreover, Lewis is equally remembered for having stressed that covalent and ionic bonding lie at opposite ends of a continuum of bonding types. Ionic bonding represents a case of very unequal sharing of electron pairs rather than a categorically different species of bonding. Furthermore, all the quantum theories of chemical bonding that were subsequently developed, including valence bond theory and molecular orbital theory, have maintained the notion of electron pairs as being central to an understanding of chemical bonding. Here is how Robert Kohler expressed the importance of Lewis' work in his classic article on the history of Lewis' account of the chemical bond, The first satisfactory picture of the chemical bond was proposed early in 1916 by Gilbert N. Lewis (1875Lewis ( -1946, the American physical chemist better known to some for his work on thermodynamics. His book, Valence and the Structure of Atoms and Molecules (1923), which elaborated the picture of the bond as a shared pair of electrons, was the textbook of the new generation of mechanistic chemists. Without Lewis's conception of the shared pair bond, the interpretation of reaction mechanisms already begun by the English school of A. Lapworth Now a pair of electrons should lead to the mutual repulsion of two particles of like charge. Lewis grappled with this notion from the beginning of his work. First, he proposed that perhaps Coulomb's law might break down in the microscopic realm.
Secondly, Lewis began to suggest that the opposite magnetic properties of a pair of electrons might be responsible for overcoming the mutual repulsion between them. Many of these ideas have been retained in some form or other, following the advent of quantum theories of bonding. Electrons were found to be characterized by two possible spin quantum numbers. Moreover, the notion of pairs of electrons is retained in the quantum notion of an atomic orbital containing just two electrons with opposite spin quantum numbers, as dictated by the Pauli principle.
Needless to say, the quantum theoretical approach provides a quantitative account of chemical bonding through the mechanism of electron exchange energy, which serves to stabilize a molecule. Nevertheless, the iconic idea of pairs of electrons at the heart of chemistry retains its validity. Nor was Lewis' idea confined just to inorganic chemistry, since it also had a profound influence on the development of physical organic chemistry at the hands of Robinson, Ingold and a host of other chemists up to and including Roberts, Woodward and Hoffmann in more recent times (Laidler 1993;Brock 1993).
Lewis also developed a simple 'back of the envelope' method of discovering the number and nature of the bonds as well as lone pair electrons in any given molecule. These Lewis structures remain useful to the present day, and continue to form part of the general chemistry teaching curriculum and for good reason. Although there are cases in which this approach breaks down or gives incorrect predictions, it remains as part of the staple diet of working chemists as an immensely practical way of thinking about chemical bonding that requires absolutely no quantum mechanics or computation.
On the basis of the Lewis structure of any molecule and using the equally classical and non-quantum mechanical approach known as valence shell electron pair repulsion (VSEPR) method, chemists can also predict the 3-D shape of most molecules to a considerable degree of accuracy. From the shape of the molecule one can go on to predict whether any particular molecule might have a net dipole or not, a fact that can serve to explain all manner of other properties of molecules. For example, the bent shape of the water molecule explains why it has a net dipole which in turn explains why it is capable of dissolving ionic salts, why it has an anomalous boiling point and so on.
Of course, these classical approaches to bonding which are the direct outcome of Lewis' notion of identifying bonding with electron pairs can sometimes fail, but they remain as very useful methods that can yield rationalizations and even predictions about any given molecule.
Returning to Lewis' definition of acidity, it is not only a genuine generalization of the previous definitions of acidity, but also part of a much greater unification between various central ideas in chemistry including structure and bonding as well as chemical reactivity in general that was initiated by Lewis.
Some genuine philosophical issues concerning Lewis acidity
One genuine concern with the Lewis definition, which has been known since the inception of his definition, is that acids have different relative strengths depending on the base with which they react, an issue that is not discussed by Chang incidentally.
As a result of this apparent disadvantage, no unique order of acid strengths can be formulated within the Lewis definition. This feature is due to the fact that acidity becomes a response function for Lewis. That is to say a substance is acidic or basic depending on what substance it is chemically related to. 3 Nevertheless, many approaches have been devised including the use of the SbCl 5 affinity scale, the BF 3 affinity scale, various thermodynamic and spectroscopic scales and gas phase affinity scales (Laurence et al. 2011). Indeed, the existence of all of these additional approaches to quantifying acidity serve to further refute Chang's claim that Lewis acidity is not capable of being quantified.
Conclusions
I claim there is no rupture between the Brønsted-Lowry and Lewis conceptions of acidity contrary to what Chang believes. The fact that a pH meter cannot be used to measure the degree of acidity in the case of some Lewis acids does not imply an absence of possible means of measuring acidity. A simple appeal to the principles of chemical equilibrium provides us with the well-known concept of stability constants for the formation of acid-base complexes such as in the reactions of transition metal ions with a set of ligands.
Lewis' theory of acidity has the advantage of being centered on the concept of pairs of electrons which are also essential to the discussion of all chemical reactions and the formation of chemical bonds. Only by ignoring these other uses of the concept of electron pairs can Chang create the illusion that Lewis' definition of acidity is somehow inferior to the earlier more classical definitions in terms of the transfer of H + ions.
Moreover, the concept of electron pairs retains its central importance in the quantum theories of bonding, namely molecular orbital theory and valence bond theory. To downgrade Lewis' definition of acidity because it deals with electron pairs rather than protons amounts to also downgrading huge swathes of modern chemistry, such as equilibrium thermodynamics as well as classical and quantum theories of chemical bonding.
Lewis unified our understanding of acids and bases, together with reactions that lead to covalent bond formation in general in that they all involve electron pairs. According to Lewis, an acid accepts both the electrons in a pair in the process of forming a dative bond. Meanwhile, a hydrogen atom reacting with a bromine atom, for example, do so by each providing one electron in the shared pair in a typical covalent bond. Dative bonds and typical covalent bonds are thereby regarded as variations on the same theme.
All these unifications achieved by Lewis, are indirectly undermined by Chang's sustained attack on Lewis acidity. Instead of looking at Lewis acids as 'failed acids', we should consider the situation the other way round. Lewis acids (dative covalent bonding formation) form a subset of covalent bonding in general. Like oxidation and reduction, the modern view of acids and bases transcends the layperson's view and it is puzzling to read about Chang's nostalgia for acids in the layperson's sense of the term?
As was emphasized above, HCl is not intrinsically acidic. It only becomes acidic on reacting with water or another polar solvent. It is perfectly consistent to consider H + as a Lewis acid and Lewis' definition is a genuine generalization of that of Brønsted & Lowry. As was argued earlier, and contrary to Chang's claims, there is no "rupture" or incommensurability. Our inability to measure Lewis acidity with a pH meter is neither here nor there given that Lewis acidity can be quantified, as can Arrhenius and Brønsted-Lowry acidity, by appeal to equilibrium theory and stability constants.
Activity is a technical term used to characterize the non-ideal behavior of ions. This is a well-understood phenomenon and does not point to any aspect that is "less that tight", to cite Chang's words one again. Electron pairs, which lie at the heart of Lewis' theory of bonding as well as his definition of acids, rather than protons, are the key to understanding bonding, both classically and quantum mechanically.
Acid-base behavior emerges as just one kind of chemical reaction among many other types that involve electron pairs and is thus placed into a wider context by means of Lewis' definition. As was suggested earlier, Lewis achieved the unification between acid-base reactions and reactions involved in bonding in general. Perhaps Chang should consider the advantages of this profound unification rather than claiming that there is dis-unity and rupture in our current knowledge of acids.
Finally, there is an important logical point. Any concept refers to a set of items and a definition of the concept intends to identify all the members of that set. In turn, generalizing a concept implies enlarging the set of the items referred to by the concept. Furthermore, a concept can be generalized in two ways. One is by adding a new property to the property which originally identified the set. Another way is by changing the property that identifies the members of the set.
The case of acidity corresponds to the second way, in that the property that identifies acid substances in Lewis' theory is different from the property that identifies acid substances in Brønsted-Lowry theory. Perhaps it is this fact that leads Chang to talk about "rupture." However, this does not mean that the two definitions are incommensurable. Since the set of acid substances identified by Brønsted-Lowry definition is a subset of the set of acid substances identified by Lewis definition, as shown in Fig. 2, then Lewis' definition is a generalization of the Brønsted-Lowry definition.
In logical language, one can say that, strictly speaking, there are two concepts whose intensions are different, but whose extensions are related by inclusion. In simpler words, all substances that are acid according to Brønsted-Lowry definition are also acid according to Lewis definition, but not vice versa. But, if this is the case, Chang's claim, as cited earlier, that.
… the Lewis and the Bronsted-Lowry definitions refer to two different sets of chemical substances; there is an overlap between the two sets, but one is not a subset of the other.
is simply incorrect. This comment does not depend on particular views about the real nature of acidity or about which of the two definitions is 'better', but is a logical point, which I believe serves to strengthen the present critique. 4 | 8,023.4 | 2022-05-31T00:00:00.000 | [
"Philosophy"
] |
Fast Radio Bursts in the Disks of Active Galactic Nuclei
Fast radio bursts (FRBs) are luminous millisecond-duration radio pulses with extragalactic origin, which were discovered more than a decade ago. Despite the numerous samples, the physical origin of FRBs remains poorly understood. FRBs have been thought to originate from young magnetars or accreting compact objects (COs). Massive stars or COs are predicted to be embedded in the accretion disks of active galactic nuclei (AGNs). The dense disk absorbs FRBs severely, making them difficult to observe. However, progenitors ejecta or outflow feedback from the accreting COs interact with the disk material to form a cavity. The existence of the cavity can reduce the absorption by the dense disk materials, making FRBs escape. Here we investigate the production and propagation of FRBs in AGN disks and find that the AGN environments lead to the following unique observational properties, which can be verified in future observation. First, the dense material in the disk can cause large dispersion measure (DM) and rotation measure (RM). Second, the toroidal magnetic field in the AGN disk can cause Faraday conversion. Third, during the shock breakout, DM and RM show non-power-law evolution patterns over time. Fourth, for accreting-powered models, higher accretion rates lead to more bright bursts in AGN disks, accounting for up to 1% of total bright repeating FRBs.
In magnetar models, based on the difference in emission regions (Zhang 2020), it can be roughly divided into 'close-in' scenario ('pulsar-like' emission originates from the magnetosphere of magnetars, e.g., Yang & Zhang 2018;Kumar & Bošnjak 2020;Lu et al. 2020) and 'faraway' scenario (gamma-ray bursts like, 'GRB-like' emission originates from relativistic outflows, e.g., Lyubarsky 2014;Beloborodov 2017;Metzger et al. 2019).Alternatively, FRBs have been argued to arise possibly from collisions of pulsars with asteroids or asteroid belts (Geng & Huang 2015;Dai et al. 2016).Luo ★<EMAIL_ADDRESS>al. (2020b) found that the polarization position angle (PA) of FRB 180301 swings across the pulse profiles, which is consistent with a magnetospheric origin.Recently, the swing of PAs has also been reported in simulations of relativistic magnetized ion-electron shocks (Iwamoto et al. 2024).For the FRB which has been active for ten years (Li et al. 2021a), the energy budget for the magnetar's magnetic energy becomes strained especially for the low-efficiency relativistic shock origin (Wu et al. 2020).One way to alleviate this problem is to seek other central engines.The close-in models are only applicable to magnetar engines, while the far-away models are applicable to a wider range of scenarios as long as energy can be injected into the surrounding medium from the central engine.Accreting-powered models from compact objects (COs), e.g., black holes (BHs) and neutron stars (NSs), have been proposed (Katz 2020;Deng et al. 2021;Li et al. 2021b;Sridhar et al. 2021;Sridhar & Metzger 2022).
COs are predicted to be embedded in the accretion disk of active galactic nuclei (AGN).Magnetars could form from the core-collapse (CC) of massive stars.Massive stars in AGN disks are more likely to evolve rapid rotation (Jermyn et al. 2021), and eventually more magnetars are produced via the dynamo mechanism (Raynaud et al. 2020).In some cases, a magnetar can also be formed in the following process: binary neutron star (BNS) mergers (Dai & Lu 1998;Rosswog et al. 2003;Dai et al. 2006;Price & Rosswog 2006;Giacomazzo & Perna 2013), binary white dwarf (BWD) mergers (King et al. 2001;Yoon et al. 2007;Schwab et al. 2016), accretion-induced collapse (AIC) of WDs (Nomoto & Kondo 1991;Tauris et al. 2013;Schwab et al. 2015) and neutron star-white dwarf (NSWD) mergers (Zhong & Dai 2020).The AGN disk provides a favorable environ-ment for these processes (e.g.McKernan et al. 2020;Perna et al. 2021;Zhu et al. 2021b;Luo et al. 2023).On the other hand, the accretion rate onto COs can be hyper-Eddington (e.g.Wang et al. 2021a;Pan & Yang 2021b;Chen et al. 2023), which would release enough inflow energy for single FRB or successive ones (Stone et al. 2017;Bartos et al. 2017).Meanwhile, the accreting COs can potentially launch relativistic jets (e.g.Tagawa et al. 2022Tagawa et al. , 2023;;Chen & Dai 2024), providing an optimistic channel to drive FRBs.Overall, FRBs are expected to be produced by both young magnetars and accreting COs in AGN disks.Alternatively, FRBs have been argued to arise possibly from collisions of pulsars with asteroids or asteroid belts.
In this paper, we investigate the generation and propagation of FRBs in AGN disks.Schematic diagrams of our model are shown in Figure 1, and related physical processes are shown in Figure 2.This paper is organized as follows.We assume that young magnetars or accreting COs in AGN disk can emit FRBs, and then progenitors' ejecta or outflows of the accretion disk interact with the disk material to form a cavity.The existence of the cavity can reduce the absorption of materials in the dense disk, making the FRB easier to observe.If the feedback of the ejecta or outflows is weak, the DM, absorption and RM from the disk are presented in Section 2. For FRBs from the magnetosphere of young magnetars, the AGN disk environments do not affect the radiation properties.The propagating effects from the cavity opened by progenitors' ejecta are presented in Section 3.For accreting-powered models, the burst luminosity depends on the accreting rate.The hyper-Eddington accreting of CO disks in AGN disks makes FRB brighter.In addition to burst luminosity, propagation effects from the cavity opened by outflows from CO disks are given in Section 4. The inflections on turbulent disk materials and magnetic field governed by a dynamo-like mechanism in AGN disk are discussed briefly in Section 5. Finally, conclusions are given in Section 6.In this work, we use the expression = /10 in cgs units unless otherwise noted.
DM AND RM CONTRIBUTED BY AGN DISKS
In this work, we investigate the observational properties of FRBs in two different disk models, SG model (Sirko & Goodman 2003) and TQM model (Thompson et al. 2005).The disk structures of the SG/TQM model are presented in Appendix A. Laterally propagating signals are easily absorbed by dense materials.Therefore, we can only receive signals from the vertical direction.The vertical structure of AGN disks has a Gaussian density profile where 0 () is the mid-plane disk density and is the scale height, which is given by the solution of the disk model (see Appendix A).
For the inner disk, the free-free absorption optical depth is extremely high at the midplane of the disk due to dense ionized gas.We can only detect FRBs from source sites at a few scale heights, e.g. for the source at ∼ 10 −3 pc, the disk becomes optically thin only for ℎ ≳ 3.9 (Perna et al. 2021).However, the temperature is lower for the outer disk.The temperature for an SG disk is ∼ 10 3 − 10 4 K when > 10 4 , where ≡ 2 / 2 is the gravitational radius of the SMBH.While for a TQM disk, it is just ∼ 10 2 − 10 3 K.If there is no extra ionization process (Ultraviolet/X-ray photons or shocks), gases are neutral at such temperatures.However, gases may also be ionized by radiation (Dyson & Williams 1980) or shocks associated with transients in AGN disk, such as supernova explosions (Grishin et al. 2021;Moranchel-Basurto et al. 2021;Li et al. 2023b), accretion-induced collapse of an NS (Perna et al. 2021) or a WD (Zhu et al. 2021b) and accretion outflow feedbacks of compact stars (Wang et al. 2021a;Chen et al. 2023), gamma-ray bursts/kilonovae (Zhu et al. 2021a;Ren et al. 2022;Yuan et al. 2022), and gravitationalwave bursts (Wang et al. 2021b).In this section, we assume that the radiation or shocks simply provide additional ionization and do not need to be associated with FRBs.In some models, FRBs are associated with young magnetars or accreting COs and we investigate the feedback (progenitors' ejecta or accreting outflows) in sections 3 and 4 in detail.
Considering the ionization process in detail is complicated, we simply discuss two extreme cases (Perna et al. 2021): the material is fully ionized and there is no extra ionization source in the vertical direction.FRBs are dispersed or absorbed by the disk plasma (see Section 2.1).The magnetic fields in the disk cause Faraday rotationconversion (see Section 2.2).
DM and the optical depth from the disk
If a FRB source is located at height ℎ, the DM from the disk for the ionized outer disk is where erfc is the complementary error function and ĥ = ℎ/.The mean molecular weight = 0.62 is taken in this work.If the FRB source is located in the midplane of the AGN disk, DM can be estimated as The free-free absorption optical depth is () = ∫ LOS ff ()d (Rybicki & Lightman 1986), where ff () = 0.018 −3/2 2 i e i −2 ḡff is the free-free absorption coefficient with ḡff ∼ 1 being the Gaunt factor.Here we assume e ∼ i and i ∼ 1.For the vertical direction, the free-free absorption optical depth is where 0 is the free-free absorption optical depth from the midplane of the AGN disk (5) DMs for different disk models are shown in Figure 3.The free-free absorption optically thick region for a signal with a frequency of 1 GHz is shown in gray.Due to absorption, FRBs can be observed only from source sited at a few scale heights.For the AGN disk of an SMBH with = 10 8 ⊙ , the largest DM contributed by the disk is about 10 4 − 10 5 pc cm −2 .Although such value is larger than any DM of FRBs known to date, there are still possibilities that it can be detected by radio telescopes such as CHIME, which can detect DMs up to ∼13,000 pc cm −2 (CHIME/FRB Collaboration et al. 2018).For an SMBH with = 4 × 10 6 ⊙ , the largest DM contributed by the disk is about 10 3 pc cm −2 .It is similar to the DM of FRB 20190520B (Niu et al. 2022).
Faraday rotation-conversion from the disk
The radiation transfer equation of the Stokes parameters is (Sazonov 1969;Melrose & McPhedran 1991) where , = , , , are the emission coefficients, , = , , , are the absorption coefficients and , = , , are the Faraday rotation-conversion coefficients.If there are no extra emission and absorption processes during the propagation of FRBs, the total intensity is conserved and can be assumed to be unity.Thus, the transfer equation can be simplified to The linear polarization degree is = √︁ 2 + 2 , the circle polarization degree is and the total polarization degree is =
√
2 + 2 .The Faraday rotation rate of the electromagnetic wave with the an-
Escape Absorbed
Strong Feedback
Escape Absorbed
Accreting COs
DM, Faraday rotationconversion
Long-term unchanged Time-dependent Figure 2. Related physical processes of FRBs in AGN disks.FRBs are thought to have originated from active magnetars or accreting compact COs.Progenitors' ejecta or accretion disk outflows interact with the disk material to form a cavity.If the feedback of the ejecta or outflows is weak, whether the FRB is absorbed or not depends only on the location of the source.In this case, the DM and RM from disk material are stable in the long term but fluctuate in the short term due to the turbulence.The toroidal magnetic field in the AGN disk can cause Faraday conversion.If the feedback is strong, as the cavity expands, at some point the optical depth drops below unity, allowing the FRB to be observed.The cavity expansion causes time-dependent DM and RM.For accreting-powered models, the burst luminosity depends on the accreting rate.The hyper-Eddington accreting of CO disks in AGN disks makes FRB brighter.gular frequency is (Gruzinov & Levin 2019) and the Faraday conversion rate is where B , B , B are the three components of the magnetic field unit direction vector B. = √︁ 4 2 / and = / are plasma and Larmor frequencies, respectively.
Another description of the transfer Equation ( 7) is in the vector space: where = (, , ) and = , , .The geometric interpretation of Equation ( 10) is the Faraday rotation-conversion (or generalized Faraday rotation) on the Poincaré sphere.The Faraday rotation or Faraday conversion angle is which are only relevant to the properties of the Faraday screen.The importance of FR and FC can be evaluated by the radio of Faraday conversion rate and Faraday rotation rate If ∼ ∼ , we have / ≪ 1, which means that the Faraday rotation dominates.In most cases, the Faraday conversion is negligible unless the magnetic field has a significant vertical component (Melrose & Robinson 1994;Melrose 2010;Gruzinov & Levin 2019), e.g., the magnetic field reversal region in a binary system (Wang et al. 2022;Li et al. 2023a;Xia et al. 2023) or a quasi-toroidal magnetic field in a supernova remnant (SNR; Qu & Zhang 2023).
The magnetic field of the gas in the AGN disk can be estimated by the ratio of gas pressure to magnetic pressure For the very strong magnetization disk, B ∼ 10, while for the very weak magnetization levels, B ∼ 10 5 (Salvesen et al. 2016).The magnetic field of the AGN disk has both toroidal and poloidal components.If we set the direction of the toroidal field is the -axis and the direction perpendicular to the disk midplane (line of sight direction) is the -axis.In this work, we only consider Faraday rotation DMs form the AGN disk.The free-free absorption optically thick region for a signal with a frequency of 1 GHz is shown in gray.Due to absorption, FRBs can be observed only from source sited at a few scale heights.For the AGN disk of an SMBH with = 10 8 ⊙ , the largest DM contributed by the disk is about 10 4 − 10 5 pc cm −2 .For an SMBH with = 4 × 10 6 ⊙ , the largest DM contributed by the disk is about 10 3 pc cm −2 .It is almost the same as the DM of FRB 20190520B (Niu et al. 2022).
and Faraday conversion in the case where poloidal and toroidal fields dominate and we assume that B does not change with height.
For the poloidal field, the parallel component is B , where 0 is the location of the source.If the outer disk is fully ionized, for an FRB emitted at a height ℎ from the midplane, its RM is where RM 0 is the RM from the midplane RMs from the weak magnetization AGN disk ( B ∼ 10 4 , Salvesen et al. 2016) are shown in Figure 4.The free-free absorption optically thick region for a signal with a frequency of 1 GHz is shown in gray.For the AGN disk of an SMBH with = 10 8 ⊙ , the largest RM contributed by the disk is about 10 4 − 10 5 rad m −2 , which is in the same order of magnitude as the FRB with the extreme RM, e.g., FRB 20121102A (RM∼ 10 5 rad m −2 ; Michilli et al. 2018;Hilmarsson et al. 2021b) and FRB 20190520B (RM∼ 10 4 rad m −2 ; Anna-Thomas et al. 2023).For an SMBH with = 4 × 10 6 ⊙ , the largest RM contributed by the disk is about 10 2 − 10 3 rad m −2 , which is consistent with the RM of most FRBs.
If there is no extra ionization source, whether the FRB is absorbed or not depends only on the optical depth of the disk.For the TQM disk, the disk becomes optical thin at the distance ∼ 10 3 − 10 4 , where the density is about 10 −10 g cm −3 and the temperature is about a few thousand Kelvin.At this temperature, the ionization degree of RMs contributed by the weak magnetization AGN disk ( B ∼ 10 4 , Salvesen et al. 2016).The free-free absorption optically thick region for a signal with a frequency of 1 GHz is shown in gray.For the AGN disk of an SMBH with = 10 8 ⊙ , the largest RM contributed by the disk is about 10 4 − 10 5 rad m −2 , which is in the same order of magnitude as the FRB with the extreme RM, e.g., FRB 20121102A (RM∼ 10 5 rad m −2 ) and FRB 20190520B (RM∼ 10 4 rad m −2 ).For an SMBH with = 4 × 10 6 ⊙ , the largest RM contributed by the disk is about 10 2 − 10 3 rad m −2 , which is consistent with the observed RMs of most FRB. the gas is extremely low.For example, for a gas at 3000 K, the ionization degree given by the Saha equation is ∼ 10 −8 .Although the DM contribution from the disk is negligible (DM∼ 0.5 pc cm −2 ), FC may still occur for toroidal fields.By solving Equation (7), the polarization properties as a function of height are shown in the left panel of Figure 5.We assume that the intrinsic radiation of the source is 100% linearly polarized.When radiation passes through a disk with a toroidal magnetic field, the total polarization degree (, shown in the orange line) remains unchanged, and linear polarization (, shown in the blue line) and circular polarization (, shown in the green line) are converted into each other.In the 1-1.5 GHz band, the changes in linear and circular polarization are shown in the right panel of Figure 5, which is similar to the polarization properties of FRB 20201124A (Xu et al. 2022).
Magnetars could form from the CC of massive stars or compact binary mergers.Here we give the feedback of progenitors' ejecta in AGN disks.Considering that the shock propagation distance in the vertical direction may be much larger than the disk height, in addition to the disk material, the ejecta also interact with the surrounding The polarization properties vary with height (left panel) and different bands (1-1.5 GHz, right panel) when radiation passes through a disk with a toroidal magnetic field.The intrinsic radiation of the source is 100% linearly polarized.When radiation passes through a disk with a toroidal magnetic field, the total polarization degree (, shown in orange lines) remains unchanged, and linear polarization (, shown in blue lines) and circular polarization (, shown in green lines) are converted into each other.
Table 1.Disk Parameters of Different Models at 0 = 1 pc.material.Assuming the circum-disk material has a power-law density profile, the disk and circum-disk material can be modeled as (Zhou et al. 2023) where 0 () is the mid-plane disk density given in the solution of the disk model (see Appendix A), and = 1.5 − 3 is the powerlaw index (Zhou et al. 2023).The critical height ℎ c represents the boundary between the disk and the circum-disk material.The critical density c = 0 () exp −ℎ 2 c /2 2 is the boundary density at ℎ c .Observationally, the critical height is difficult to constrain.In this work, we assume that the transition between the disk and the circumdisk material occurs when (ℎ)/ 0 ∼ 10 −3 .
The cavity evolution
After SN explosions or the merger of two COs, forward and reverse shock are generated during the interaction between the ejecta and the circumstellar medium (CSM).The DM and RM from the ejecta and the shocked CSM have been well studied before (Yang & Zhang 2017;Piro & Gaensler 2018;Zhao et al. 2021;Zhao & Wang 2021).
When the swept mass is much smaller than the ejected mass, the shock evolution is in the free expansion (FE) phase with the velocity The shock velocity (top panel) and radius (bottom panel) evolution for 0 = 1 pc, 0 = 10 51 erg, ej = 10 ⊙ and = 2.The disk parameters are taken from Model c in Table 1.Timescales of FE duration, breakout and leaving the disk boundary are shown in gray vertical solid, dashed, and dashdotted lines, respectively.The height expansion (blue lines) goes through the Free expansion phase (phase I, ≤ FE ), the ST phase (decelerated by disk medium, phase II, FE < ≤ bre ), the Sakurai accelerating phase (phase III, bre < ≤ c ) and the ST phase (decelerated by circum-disk medium, phase IV, c < ≤ cav ) in sequence.The width expansion (red lines) goes through the Free expansion phase (phase I, ≤ FE ), the ST phase (phase II, FE < ≤ bre ) and the SP phase (phase V, > bre ) in sequence.
sh = 0 ≡ √︃ 0 / ej , where 0 and ej are the explosion energy and the ejecta mass, respectively.When the swept mass is comparable to the ejected mass, the shock evolution enters the Sedov-Taylor (ST) phase sh ∝ −3/5 (Taylor 1946;Sedov 1959).The duration of the FE phase is In previous studies, the contribution to DM and RM from both shocked ejecta and shocked ISM has been considered because the age of the source is unknown (Piro & Gaensler 2018;Zhao et al. 2021;Zhao & Wang 2021).However, in the AGN disk, since the FE phase (ejecta-dominated phase) lasts only a few years, we only consider the evolution of the forward shock.
The conditions for shock breakout can be roughly estimated when the vertical propagation distance is comparable to the height of the disk H ≃ (Moranchel-Basurto et al. 2021;Chen et al. 2023).After the breakout, the shock propagates in a region where the density drops sharply, and the shock is accelerated (Sakurai accelerating phase sh ∝ − , see Sakurai 1960).All the above processes can be described by the following equation (Matzner & McKee 1999) where = 0.19 (Matzner & McKee 1999), is the swept mass in the vertical direction.The FE phase, the ST phase and the Sakurai accelerating phase work when sw ≪ ej , sw ≫ ej and (ℎ) ≫ (0), respectively.The breakout timescale is When the shock reaches a critical height ℎ c , the acceleration stops and is decelerated by the circum-disk material.The shock evolution re-enters the ST phase for > c , where c and c is the shock velocity and time at ℎ = ℎ c .
Although the size of the radial propagation of the shock is much smaller than that of the vertical propagation, the radial propagation determines how long the cavity exists.When ≤ bre , the shock also experiences the FE and ST phase in the radial direction The radial distance is usually much smaller than the location of the FRB source 0 , which makes radial density changes across the disk negligible.Therefore, the swept mass in the radial direction is sw () = 4/3 3 d ( 0 ).Also, since the density does not change much, the Sakurai accelerating phase is not considered in the radial direction.When the shock breaks out, the cavity depressurizes and the SNR becomes the shape of a ring-like shell.Then, the shock evolution enters a momentum-conserving snowplow (SP) phase When the shock decelerates to the speed of the local sound, the shock evolution ends.The radial width of the cavity in the AGN disk is The cavity formation timescale can be calculated from Equation ( 22) In summary, shock velocity is given by and The shock radius can be obtained by solving Equations ( 25) and ( 26).The radial and vertical shock evolution for the SNe explosion model with 0 = 1 pc, 0 = 10 51 erg, ej = 10 ⊙ and = 2 is shown in Figure 6.The disk parameters are taken from Model c in Table 1.Timescales of FE duration, breakout and leaving the disk boundary are shown in gray vertical solid, dashed, and dash-dotted lines, respectively.The height expansion (blue lines) goes through the Free expansion phase (phase I, ≤ FE ), the ST phase (decelerated by disk medium, phase II, FE < ≤ bre ), the Sakurai accelerating phase (phase III, bre < ≤ c ) and the ST phase (decelerated by circum-disk medium, phase IV, c < ≤ cav ) in sequence.The width expansion (red lines) goes through the Free expansion phase (phase I, ≤ FE ), the ST phase (phase II, FE < ≤ bre ) and the SP phase (phase V, > bre ) in sequence.
Finally, the cavity is refilled by the AGN disk material with the speed of the local sound, and the refill timescale is estimated by
DM and RM Variations
The long-term monitoring of DM and RM of repeating FRBs can reveal environments of the magnetar, so it is necessary to show the time-dependent DM and RM.The contributions from the disk are given before and are unimportant after the shock breakouts.Thus, we consider the contributions of DM mainly from two regions: the unshocked cavity and the shocked shell.In the shocked region, some shock energy converts into magnetic field energy.Therefore, RM is only contributed by the shocked shell.Before the shock breakouts, the difference in expansion between the radial and vertical directions can be ignored, so the cavity can be regarded as spherical (see Figure 6).However, after the shock breakouts, because the density of gas in the AGN disk decreases rapidly in the vertical direction, the vertical propagation of the shock becomes easier.At this point, the shape of the cavity deviates from a spherical shape.For simplicity, we assume that the cavity is always a cylinder during expansion and the cavity volume is cav = H 2 W , where H and W are the solutions on Equations ( 25) and ( 26).The time-dependent DM from the cavity is where is the thickness of the shock and is the ionization fraction.The shock thickness 1 − ∼ 0.9 is taken, which is consistent with results given by self-similar solutions of the SNR (Chevalier 1982).From Equation (28), we find that DM cav only depends on the radial expansion.For the FE phase, the cavity size is much smaller than the disk height ( W ≈ H ≪ ).At this time, the disk density can be regarded as a constant (see Equation ( 1)).Thus, DM from the cavity evolves as DM cav ∝ −2 W ∝ −2 .The approximate scaling laws for the remaining two phases in Equation ( 28) can be obtained in the same way.
The time-dependent free-free absorption optical depth from the cavity is 3.2.Solid and dashed lines represent the case of SG and TQM disk models, respectively. ff,sh = 1 are shown in gray dashed horizontal lines.For SG-NSWD and SG-BNS models with = 10 8 ⊙ , the optical depth is still greater than unity when the cavity stops expanding.In other cases, the cavity becomes transparent sometime after the magnetar is born.For binary mergers or AIC progenitors, this time is hundreds to thousands of years for the SG disk model, while it only takes a few decades for the TQM disk model.For massive star progenitors, the time become transport is about a thousand years for the SG disk model, while it is shortened to a few hundred years for the TQM disk model.
For the unshocked ejecta, a low ionization fraction = 0.1 and temperature ej = 10 4 K is taken (Zhao et al. 2021).Except for the Sakurai accelerating phase ( bre < ≤ c ), where H depends on the numerical solution of Equation ( 25), the approximate scaling laws for the remaining phases is given in Equation ( 29).
For the shock region, the temperature can be estimated from where B is the Boltzmann constant and the shock velocity sh is given in Equation ( 25).The high temperature makes gas fully ionized.The density of the shocked matter is sh = 4 cd for strong shock waves.The time-dependent DM from the shocked shell is The time-dependent free-free absorption optical depth from the shocked shell is The magnetic field in the shocked region is where B is the magnetic energy density fraction and th = 9 cd 2 sh /8.The RM from the shocked shell is
The influence of the progenitors
The DM, ff and RM evolution of the SNe model are shown in Figure 7.The parameters are the same as Figure 6. ).Although the material in the AGN disk is very dense, as the cavity expands, it can become transparent around a thousand years after the birth of the magnetar.
Although the merger dynamics are complicated, we can think of mergers as an explosion like a supernova in the long term, ejecting a certain amount of energy and material into the surroundings (Zhao et al. 2021).Due to the lack of observations, the mass and energy of the ejecta after compact binary mergers are taken from the results of the numerical simulation (Dessart et al. 2007;Bauswein et al. 2013;Radice et al. 2018;Zenati et al. 2019; see Table 3.2).
The DM, free-free absorption optical depth and RM evolution of different magnetar formation channels are shown in Figure 8.The parameters and the evolution timescale of each model are listed in Table 3.2.Solid and dashed lines represent the case of SG and TQM disk models, respectively. ff,sh = 1 are shown in gray dashed horizontal lines.For SG-NSWD and SG-BNS models with = 10 8 ⊙ , the optical depth is still greater than unity when the cavity stops expanding.In other cases, the cavity becomes transparent sometime after the magnetar is born.For binary mergers or AIC progenitors, this time is hundreds to thousands of years for the SG disk model, while it only takes a few decades for the TQM disk model.For massive star progenitors, the time become transport is about a thousand years for the SG disk model, while it is shortened to a few hundred years for the TQM disk model.
ACCRETING COMPACT OBJECTS IN AGN DISKS
The close-in models are only applicable to magnetar engines, while the far-away models apply to a wider range of scenarios as long as energy can be injected into the surrounding medium from the central engine.Motivated by the periodic repeating FRBs (Chime/Frb Collaboration et al. 2020;Rajwade et al. 2020;Cruces et al. 2021), accreting-powered models from COs in ULX-like binaries have been proposed (Sridhar et al. 2021).In this model, FRBs are generated via synchrotron maser emission from the short-lived relativistic outflows (or "flares") decelerated by the pre-existing (or "quiescent") jet.For the BH engine, if the spin axis is misaligned with the angular momentum axis of the accretion disk, Lens-Thirring (LT) precession makes the FRB periodic.FRB from the precession jet encounters the disk wind which contributes variable DMs and RMs.The synchrotron radio emission from the ULX hypernebulae has been proposed to explain the persistent radio source (PRS) of FRBs (Sridhar & Met-zger 2022), e.g., FRB 20121102A (Chatterjee et al. 2017) and FRB 20190520B (Niu et al. 2022).
To explain some most luminous FRBs, the COs should be undergoing hyper-Eddington mass transfer from a main-sequence star companion.We would like to point out that mass inflow rates of COs in the AGN disks are also possible to be extremely hyper-Eddington (Chen et al. 2023).In this section, we investigate the accretingpowered models of FRBs in the AGN disks.In this work, we focus on the influence of the AGN disk environment, such as the burst properties and variable DMs and RMs from the disk wind (ignore the precession).Other properties can be found in Sridhar et al. (2021); Sridhar & Metzger (2022).
Burst properties
In this section, we briefly outline the intrinsic properties of FRBs from accreting COs (taking a BH as an example) in AGN disks.The extremely high accreting rate of COs in the AGN disk environment (see Section 4.2 in detail) has a great impact on the burst luminosity and peak frequency.When a BH is rotating, the spin energy can be extracted by forming ultra-relativistic jets (i.e., Blandford-Znajek (BZ) mechanism Blandford & Znajek 1977).The BZ luminosity is (Tchekhovskoy et al. 2011) where BZ is the jet efficiency, = / Edd is the the dimensionless accretion rate, Edd = Edd / 2 is the Eddington limit accretion rate and Edd = 4 / = 1.26 × 10 39 erg s −1 /10 ⊙ is the Eddington limit accretion luminosity.The maximum jet efficiency is max ∼ 1.4 for an extreme rotating BH with the dimensionless spin parameter = 0.99 (Tchekhovskoy et al. 2011).For an accreting NS, the jet power can also be extracted from the rotational energy via a BZ-like mechanism (Parfrey et al. 2016).Taking ∼ max ≃ 1, the maximum isotropic-equivalent FRB luminosity can be estimated by where CO is the CO mass, is the radio emission efficiency (for synchrotron maser scenarios, ∼ 10 −3 , e.g., Sridhar et al. 2021) and is the beaming fraction. acc is the efficient accretion timescale of COs (see Section 4.3).As mentioned before, FRBs are powered by the sudden accretion flare with the luminosity f = • 2 .The flare ejecta propagates into the cavity of the quiescent jet with the luminosity and bulk Lorentz factor being q = q • 2 and Γ q , respectively.For FRBs to escape, the quiescent jet should have a large bulk Lorentz factor Γ q ≳ 100 and a low jet efficiency q ≪ 1 (Sridhar et al. 2021).During the interaction between the flare ejecta and the gas in the quiescent jet, a forward shock generates with a radius (Sari & Piran 1995) where f is the duration of burst.The blast Lorentz factor Γ sh is given by pressure balance (Beloborodov 2017) Electrons at the FRB emission radius FRB in the shocked gas gyrate with the Larmor radius L = Γ q e 2 / q , where q ≃ q q / 2 FRB 1/2 is the lab-frame measured magnetic field in high magnetization ( q ≳ 1) upstream.The peak frequency from synchrotron maser emission is (Gallant et al. 1992;Plotnikov & Sironi 2019) FRBs have been detected from 110 MHz (Pleunis et al. 2021) to 8 GHz (Gajjar et al. 2018), which is about the same as the estimated frequency for Γ q ∼ 100 and q ∼ 0.01.When < acc , the CO accretion rate in AGN disks is extreme hyper-Eddington ( ∼ 10 5 − 10 10 , see Chen et al. 2023), which is enough to drive the most luminous FRBs till now.If the disk wind cavity can become optical thin before the efficient accretion stops (see Section 4.3), high-frequency bright bursts are generated.However, when > acc , the accretion is weak in the low-density cavity ( ∼ 10, Chen et al. 2023).At this time, we can only receive low-frequency faint bursts.The intrinsic burst properties for different accretion CO models are shown in Table 3, and the disk parameters are taken from the TQM disk model in Table 1.
Accretion and outflow of COs
We adopt descriptions of the accretion and outflow of COs in AGN disks from Chen et al. 2023.Here, we list the relative equations briefly.The mass inflow rate of the CO accretion disk at the outer boundary ( obd ) can be described based on the Bondi-Holye-Lyttleton (BHL) accretion rate (see Edgar 2004 for a review) where CO is the gas density near the CO, and rel is the relative velocity between the CO and the gas in the AGN disk.However, in the AGN disk, taking into account the influence of SMBH gravity and the finite height of the AGN disk, the modified gas inflow rate is (Kocsis et al. 2011) where Hill = ( CO /3) 1/3 CO is the Hill radius with CO being the radial location of the CO and BHL = CO / 2 rel + 2 s is BHL radius.
The outer boundary radius of the circum-CO disk can be approximated to the circularization radius ( obd ∼ cir ).Under the assumption of the angular momentum conservation, the circularization radius of infalling gas is where rel = min { BHL , Hill } is the radius of CO gravity sphere.Due to the differential rotation of the AGN disk, the captured gas exhibits varying velocity relative to the CO, which can be estimated as where K is the Keplerian velocity of the AGN disk.In the above Equation, we use the following approximation: CO ≫ rel , which is obviously true for the CO location we are interested in.Though simplified, the resulting cir approximately matches the values obtained from numerical simulations (e.g.Tanigawa et al. 2012;Li et al. 2022).
Besides capturing the nearby gas, an embedded CO can also exert a gravitational torque to repel the ambient gas, resulting in a reduction of gas density in the AGN disk annulus at CO (e.g. Ward 1997).Consequently, the gas density around the CO is set as where the reduced density and half-width of the reduced region are (Kanagawa et al. 2015(Kanagawa et al. , 2016;;Tanigawa & Tanaka 2016) and The outer region of the circum-CO accretion disk becomes selfgravity unstable if the inflow mass rate is very high, leading to the reduction in the rate because of gas gravitational fragmentation (Pan & Yang 2021b;Tagawa et al. 2022).The Toomre parameter of the circum-CO disk is CO = Ω K /Σ ∼ 2 CO ℎ 3 3 K / inflow , where CO , ℎ = CCOD / and K = √︁ CO / is the viscosity parameter, the disk height ratio and the Keplerian velocity of the circum-CO disk, respectively.The modified mass inflow rate is The values of the inflow mass rate inflow and the outer boundary radius of the circum-CO disk obd depend on Equations ( 40)-( 47).It isn't easy to express with a simple analytical formula.The circum-CO accretion disk's self-gravity unstable conditions for different SMBH mass, CO locations and specific accreting models are studied in Chen et al. (2023).In this work, the value of CO of different accreting models is shown in Table 3.For all the models we chose, the accretion of circum-CO disk is stable.
The initial mass inflow rates of COs in the AGN disk are given by Equation ( 47), which should be hyper-Eddington (Chen et al. 2023).Photons are trapped for the extremely hyper-Eddington inflow and the trapping radius is tr = 3 ℎ (Kocsis et al. 2011).The mass inflow rate is reduced inside the trapping radius due to the efficient outflow driven by the disk radiation pressure.In summary, the radiusdependent mass inflow rate is (Blandford & Begelman 1999) where is the power-law index.The numerical simulations of Yang et al. (2014) show that ∼ 0.4 − 1, but the smaller value is also possible (Kitaki et al. 2021).
The outflow of the circum-CO disk driven by radiation pressure can take away a fraction of the viscous heating.The luminosity of disk outflow is given by (Chen et al. 2023) where w is the fraction of the heat taken away and ≡ 2 CO / 2 is the gravitational radius of the CO.In this work, we take a constant w = 0.5 (Chen et al. 2023). out is the mass inflow rate at the outer boundary of the circum-CO disks out = min{ obd , tr }.The inner boundary of the circum-CO disk is The inner boundary for BHs is ∼ 10 .But for NSs, the strong magnetic field and the hard surface should also be considered (Takahashi et al. 2018).The circum-NS disk is truncated due to the magnetic stress of the NS magnetosphere at a radius where the typical value of is 0.5−1 (Ghosh & Lamb 1979;Chashkina et al. 2019).When the accretion flow hits the hard surface of the NS, additional energy with luminosity acc ≃ in ( in ) NS / NS is released, so the total energy injected is out + acc (Chashkina et al. 2019).The velocity of the disk wind is where the total luminosity the disk wind takes away is In our calculations, the following typical values are used: NS = 1.4 ⊙ , BH = 10 ⊙ , NS = 10 km and = 0.5.
The cavity evolution
The disk wind interacts with the disk material and forms a shocked shell, which is analogous to the evolution of the stellar-wind-driven interstellar bubbles (Castor et al. 1975;Weaver et al. 1977).At first, the wind expands freely until the mass released out out = w in the wind is comparable to the swept mass in the disk sw = 4/3( w ) 3 d (in this phase, the cavity is almost spherical).Thus, the duration of the FE phase is For ≫ FE , the shell evolution also goes through the adiabatic expansion phase and the radiative cooling phase in sequence.In both phases, the shock wave radius evolves as follows (Weaver et al. 1977): where ≈ 0.88 for the adiabatic expansion phase, but in the radiative cooling phase, swept gas collapses into a thin shell and makes ≈ 0.76 (Weaver et al. 1977).In this work, we ignore the FE and radiative cooling phases because they have less impact on the shock evolution (see Equations ( 54) and ( 55)).The shock velocity in the adiabatic expansion phase is (Weaver et al. 1977) The adiabatic expansion phase lasts until efficient accretion of CO stops.The accretion timescale can be approximated by the viscous timescale of the accretion disk (Chen et al. 2023) Similar to Section 3, the propagation of shock in the AGN disk also needs to consider the vertical and radial directions respectively.The breakout timescale is After the shock breaks out, the shock is accelerated in the vertical direction (Sakurai 1960).When the shock enters the circum-disk material, the shock evolution re-enters the adiabatic expansion phase.When the efficient accretion of CO stops, the shock transitions to the ST phase.
In the radial direction, the shock evolution enters a momentumconserving snowplow phase (see Equation 22) after the shock breaks out.The radial width of the cavity can be obtained when the shock velocity equates to the local sound speed Inspired by Equation ( 18), we describe the evolution of the cavity as in the vertical direction, and 60) and ( 61) for BH1.The model parameters are listed in the BH1 model in Table 3.The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.The height expansion (blue lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ), the adiabatic expansion phase in the circum-disk medium (phase III, c < ≤ acc ) and the ST phase (phase IV, acc < ≤ cav ) in sequence.The width expansion (red lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ) and sp phase (phase V, > bre ) in sequence.
in the radial direction.Equation ( 60) means that the shock leaves the disk boundary before the efficient accretion stops ( c ≤ acc ).But if c > acc , the diabatic expansion phase (the second line in Equation ( 60)) will be skipped.Solutions of Equations ( 60) and ( 61) for BH1 are shown in Figure 9.The model parameters are listed in the BH1 model in Table 3.The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.The height expansion (blue lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ), the adiabatic expansion phase in the circum-disk medium (phase III, c < ≤ acc ) and the ST phase (phase IV, acc < ≤ cav ) in sequence.The width expansion (red lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ) and sp phase (phase V, > bre ) in sequence.
When expansion ceases, the AGN disk material will refill the cavity.The refilled timescale is In the case of progenitor ejecta, the cavity can only be opened once.66) and ( 67) for the NS2 model.The model parameters are listed in the NS2 model in Table 3.When bre > acc , the shock evolution is in analogy with that of the SN explosion (see Figure 6).The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout and leaving the disk boundary are shown in gray vertical dashed and dash-dotted lines, respectively.The height expansion (blue lines) goes through the ST phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ) and the ST phase in the circum-disk medium (phase III, c < ≤ cav ) in sequence.The width expansion (red lines) goes through the ST phase in the disk (phase I, ≤ bre ) and sp phase (phase IV, > bre ) in sequence.
But for the case of accretion outflow, the accretion rate can be restored to hyper-Eddington after the cavity is refilled.Then, the strong disk outflow forms the cavity again and the evolution process is circular.
In Equation ( 61), we assume that the accretion timescale is much longer than the breakout timescale ( acc ≫ bre ).If bre > acc , we can treat the short-duration accretion as an injection with the energy w ∼ w acc .The radius of the shock shell expands as (Ostriker & McKee 1988) and the shock velocity is In this case, the breakout timescale can be recalculated by Equation ( 63) The shock evolution equation for bre > acc is in the vertical direction, and in the radial direction.Solutions of Equations ( 66) and ( 67) for the NS2 model are shown in Figure 10.The model parameters are listed in the NS2 model in Table 3.When bre > acc , the shock evolution is in analogy with that of the SN explosion (see Figure 6).The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout and leaving the disk boundary are shown in gray vertical dashed and dash-dotted lines, respectively.The height expansion (blue lines) goes through the ST phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ) and the ST phase in the circum-disk medium (phase III, c < ≤ cav ) in sequence.The width expansion (red lines) goes through the ST phase in the disk (phase I, ≤ bre ) and sp phase (phase IV, > bre ) in sequence.
For the SG model, weaker outflows and a shorter cavity formation timescale are expected because the sound speed is higher.For a accreting BH in the SG disk with = 4 × 10 6 ⊙ and 0 = 1 pc (model a in Table 1), the wind luminosity is w = 10 40 erg s −1 and the efficient accretion timescale is acc ≈ 0.15 yr.From Equation (65), the breakout timescale is bre ≈ 3600 yr.However, before the breakout happens, the shock is decelerated to the local speed of sound.Let s = sh in Equation ( 64), the cavity formation timescale is cav ≈ 2630 yr.This means that for the SG model, the shock is less likely to break out.In the following section, we choose the TGM model to discuss the evolution of the cavity.The evolution timescales of cavities in the AGN disk for different accretion CO models are listed in Table 3.The disk parameters are taken from the TQM disk model in Table 1.
DM and RM variations
In this section, we investigate the DM and RM from the disk wind in the cavity and the shock shell.The mass released from the disk wind is Before the shock breakouts, the difference in expansion between the radial and vertical directions can be ignored, so the cavity can be regarded as spherical.However, after this, because the density of gas in the AGN disk decreases rapidly in the vertical direction, the vertical propagation of the shock becomes easier.At this point, the shape of the cavity deviates from a spherical shape.For simplicity, we assume that the cavity is always a cylinder during expansion.When bre < acc , the DM from disk wind in the cavity can be estimated by The shock thickness 1 − = 0.86 is taken from Weaver et al. 1977.
The free-free absorption optical depth from disk wind is For the shocked region, gas is fully ionized because of the high temperature (see Equation ( 30)).The DM from the shock shell is The free-free absorption optical depth from shock shell is In this work, we assume that only the shocked region is magnetized, and the magnetic field in the shocked shell is given in Equation ( 33).The RM from the shocked shell is For the case bre > acc , the DM from the disk wind in the cavity is and the free-free absorption optical depth from disk wind is The peak frequency of FRBs are calculated for Γ q ∼ 100 and q ∼ 10 −3 (Sridhar et al. 2021).
t\U 60) and ( 61).Timescales of the breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.For BH1, the cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).After the shock breakouts, the DM from the shocked shell and the free-free absorption become neglected compared to the disk wind.Bottom panels: The time-dependent results are based on numerical solutions of Equations ( 66) and ( 67).Timescales of the breakout and leaving the disk boundary are shown in gray vertical dashed and dash-dotted lines, respectively.For NS2, the cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).
The results when acc < ≤ bre do not satisfy the approximation expression given by Equations ( 76) -(78).For NS2, the cavity radius is close to when ∼ acc , then it is no longer possible to assume that the density is constant as in the previous discussion (see Equation ( 1)).Thus, we only show the approximate expression when > c for the shocked shell.
The same as the case bre < acc , we can only give the time evolution scaling law for the phases I and III, which is given in Equations ( 74) and ( 75).
The DM, free-free absorption optical depth and RM from the shocked shell evolve as and The DM, free-free absorption optical depth and RM evolution of accretion COs model for BH1 (top panel, based on numerical solutions of Equations ( 60) and ( 61)) and NS2 (bottom panel, based on numerical solutions of Equations ( 66) and ( 67)) are shown in Figure 11.The model parameters are listed in Tabel 3. The contributions from the unshocked cavity and the shocked shell are shown in blue and red lines, respectively.Same as the young magnetar models, DM ff and RM show non-power-law evolution patterns over time in the Sakurai accelerating phase, which is different from other environments, e.g., SNR (Yang & Zhang 2017;Piro & Gaensler 2018), compact binary merger remnant (Zhao et al. 2021), PWN/MWN (Margalit & Metzger 2018;Yang & Dai 2019;Zhao & Wang 2021).For BH1, timescales of the breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.The cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).After the shock breakouts, the DM from the shocked shell and the free-free absorption become neglected compared to the disk wind.For NS2, the time-dependent results are based on numerical solutions of Equations ( 66) and (67).The cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).The results when acc < ≤ bre do not satisfy the approximation expression given by Equations ( 76) -(78).For NS2, the cavity radius is close to when ∼ acc , then it is no longer possible to assume that the density is constant as in the previous discussion (see Equation (1)).Thus, we only show the approximate expression when > c for the shocked shell.
The total DM, ff and RM (including the contributions of the cavity and the shocked shell) from different accretion CO models are shown in Figure 12.The model parameters are listed in Tabel 3.For the model BH1, BH2, BH3, BH4 and NS1, the effective accretion timescale is much longer than the shock breakout timescale, whose cavity evolution is governed by Equations ( 60) and (61).However, for NS2, the duration of effective accretion is very short compared to the shock breakout timescale, whose cavity evolution is governed by Equations ( 66) and (67).The condition under which FRBs with = 1 GHz can be observed ( ff = 1) is represented by the gray horizontal dashed line.For = 4 × 10 6 ⊙ (shown in blue lines), the cavity formation timescale is about tens of thousands of years.For the model of accreting BHs (BH1 and BH2), the effective accretion lasts about a few thousand years.When c < ≤ acc , the DM sh decrease as ∝ 2(1−) 5− but DM w increase as ∝ 1/3 .Thus, the total DM (DM = DM sh +DM w ∼ 10 2 −10 3 pc cm −2 ) shows a relatively stable evolutionary trend at this phase.When > acc , the DM is dominated by the disk wind in the cavity and decreases as ∝ −2/3 .For NS1, the effective accretion only lasts about a few hundred years.Owing to less disk outflow materials accumulating in the cavity, the DM and ff mainly come from the shocked shell.After the shock breakouts, the shocked shell becomes transparent.The DM (DM ≃ DM sh ) evolves as ∝ for acc < ≤ cav .For = 10 8 ⊙ ( shown in red lines), the cavity formation timescale is about several thousands of years.For the model of accreting BHs (BH3 and BH4), the DM, ff and RM profiles are similar to the case of = 4 × 10 6 ⊙ , but the value is greater.For NS2, the DM and ff are dominated by the shocked shell (see Figure 11).For the accreting CO models, the RM is large and decreases with time, which is similar to the RM of FRB 20121102A (Michilli et al. 2018;Hilmarsson et al. 2021b).
DISCUSSION
In addition to the DM, absorption, Faraday rotation-conversion and burst luminosity discussed above, other possible observation properties should also be studied and tested in the future in the frame of the AGN disk.
The distribution of COs in AGN disk
Both magnetars and accreting COs are expected to be embedded in AGN disks.The distribution of FRB sources in AGN disk depends on the formations and migration of COs.COs can be formed via the core collapse of massive stars or AIC/compact binary merger.The outer disk is self-gravity unstable and is ongoing effective star formation.Thus, the SNe formation channel mainly occurs in the outer disk.The AIC of NSs preferentially occurs in the outer disk, while the binary NS merger preferentially occurs at ≲ 10 −3 pc from the SMBH (Perna et al. 2021).The migration of COs in the disk is subject to large uncertainty (e.g.Bellovary et al. 2016;Pan & Yang 2021a;Grishin et al. 2023).Migration traps caused by gas torque are thought to occur at tens to hundreds of times the gravitational radius (Bellovary et al. 2016).However, migration traps can also exist at larger distances (∼ 10 3−5 g ) if thermal torques are taken into consideration (Grishin et al. 2023).
A detailed study of CO distribution is beyond the scope of this article.Although the radial distribution of COs is uncertain, FRBs can be generated across the entire disk.If the FRB sources are in the inner disk, we can only detect signal sites at a few scale heights because of the absorption of the dense gas.However, most COs locate close to the midplane of the disk (Perna et al. 2021).The feedback (progenitors' ejecta or accreting outflows) of magnetars or accreting COs should be considered for emission from the midplane (see Sections 3 and 4).The local sound speed for the inner disk is too high for the shock to break out.However, the feedback cavity in the outer disk can reduce the absorption by the dense disk materials, making FRBs detectable.
Event rate
The event rate in AGN disks can be estimated as (Tagawa et al. 2020) 60) and ( 61).However, for NS2, the duration of effective accretion is very short compared to the shock breakout timescale, whose cavity evolution is governed by Equations ( 66) and (67).The condition under which FRBs with = 1 GHz can be observed ( ff = 1) is represented by the gray horizontal dashed line.For = 4 × 10 6 ⊙ (shown in blue lines), the cavity formation timescale is about tens of thousands of years.
For the model of accreting BHs (BH1 and BH2), the effective accretion lasts about a few thousand years.When c < ≤ acc , the DM sh decrease as ∝ 2(1−) 5− but DM w increase as ∝ 1/3 .Thus, the total DM (DM = DM sh + DM w ∼ 10 2 − 10 3 pc cm −2 ) shows a relatively stable evolutionary trend at this phase.When > acc , the DM is dominated by the disk wind in the cavity and decreases as ∝ −2/3 .For NS1, the effective accretion only lasts about a few hundred years.
Owing to less disk outflow materials accumulating in the cavity, the DM and ff mainly come from the shocked shell.After the shock breakouts, the shocked shell becomes transparent.The DM (DM ≃ DM sh ) evolves as ∝ 5− for acc < ≤ cav .For = 10 8 ⊙ (shown in red lines), the cavity formation timescale is about several thousands of years.For the model of accreting BHs (BH3 and BH4), the DM, ff and RM profiles are similar to the case of = 4 × 10 6 ⊙ , but the value is greater.For NS2, the DM and ff are dominated by the shocked shell (see Figure 11).For the accreting CO models, the RM is large and decreases with time, which is similar to the RM of FRB 20121102A (Michilli et al. 2018;Hilmarsson et al. 2021b).
The AGN disk is also found to be inhomogeneous.Sonic-scale magneto-rotational and gravitational instabilities would commonly occur in AGN disks (e.g.Balbus & Hawley 1998;Gammie 2001;Goodman 2003;Chen & Lin 2023), both can excite inhomogeneous turbulence with locally chaotic eddies, leading to stochastic variations on DMs and RMs of FRBs.
RM reversals
Recently, RM reversals have been reported for some FRBs, such as FRB 20201124A (Xu et al. 2022), FRB 20190520B (Anna-Thomas et al. 2023) and FRB 20180301A (Kumar et al. 2023).Possible explanations of magnetic field reversals are that the FRB source is in a massive binary system (Wang et al. 2022;Zhao et al. 2023) or a magnetized turbulent environment (Anna-Thomas et al. 2023).
The large-scale toroidal magnetic fields of a magnetized accretion disk reverse with the magnetorotational instability (MRI) dynamo cycles (Salvesen et al. 2016).For the FRB sources in the AGN disks, the magnetic field reversal can be explained naturally.
CONCLUSIONS
In this work, we investigate the observational properties of FRBs to occur in the disks of AGNs.Two mainstream types of radiation mechanism models are considered, such as close-in models from the magnetosphere of magnetars and far-away models from relativistic outflows from accreting COs.Progenitors' ejecta or accretion disks' outflows interact with the disk material to form a cavity.The cavity makes the FRB easier to escape.The propagation of FRBs in the disk or cavity causes dispersion, free-free absorption, Faraday rotation and Faraday conversion.Our conclusions are summarized as follows.
• If the feedback of the ejecta or outflows is weak and there is no extra ionization source, whether the FRB is absorbed or not depends only on the optical depth of the disk.For an SG disk, the disk becomes optically thin at > 10 4 − 10 5 g .For a TQM disk, the disk becomes optical thin at the distance ∼ 10 3 − 10 4 .
• For the inner disk or the outer disk ionized by the radiation or shocks, FRBs can be observed only from sources located at a few scale heights.The largest DM and RM contributed by the disk is about 10 3 − 10 5 pc cm −2 and 10 4 − 10 5 rad m −2 , respectively.
• If the magnetic field in the AGN disks is toroidal field dominated, FC occurs.For the intrinsic 100% linearly polarized radiation, FC converts linear polarization into circular polarization when radiation passes through the disk.
• For close-in models of young magnetar born in SN explosions or the merger of two compact stars, the AGN disk environments do not affect the radiation properties.For accreting-powered models, the burst luminosity depends on the accretion rate.The event rate of magnetar-origin FRBs in AGN disks is ∼ 5 − 30 Gpc −3 yr −1 .For accreting CO models, the FRB rate in AGN disks is ∼ 3.5 − 35 Gpc −3 yr −1 .The hyper-Eddington accretion of COs in AGN disks makes FRB brighter, accounting for about 0.1% − 1% of total bright repeating FRBs.
• Shock is generated during the interaction between the ejecta or outflows and the AGN disk materials.The shock is quenched by dense disk materials in the radial direction, but can break out in the vertical direction.The DM and RM during shock breakout show a non-power law evolution pattern over time, which is completely different from other environments (such as supernova remnants).
• The gas in AGN disk is inhomogeneous and turbulent, resulting in stochastic DM and RM variations.The multipath propagation in turbulent magnetized plasma also causes frequency-dependent depolarization (Bochenek et al. 2020;Yang et al. 2022;Lu et al. 2023).
• The large-scale toroidal magnetic fields of a magnetized accretion disk reverse with the magnetorotational instability (MRI) dynamo cycles (Salvesen et al. 2016).For the FRB sources in the AGN disks, the magnetic field reversal can be explained naturally.
Figure 1 .
Figure 1.Schematic diagram of the generation and propagation of FRBs in AGN disk.(a)If the feedback of the source is weak, FRBs only can be detected from source sites at a few scale heights due to the absorption from the dense disk materials.If the feedback of the source is strong, a shock is formed during the interaction between the ejecta/outflows and the disk materials.The shock punches a cavity in the AGN disk.When ≤ bre , the shock shell is almost spherical.(b) After the shock breaks out, the vertical propagation of the shock becomes easier because the density of gas in the AGN disk decreases rapidly.At this point, the shape of the cavity becomes ringlike.(c) When the shock decelerates to the speed of the local sound, the shock evolution ends.Finally, the cavity is refilled by the AGN disk material.In the case of progenitor ejecta, the cavity can only be opened once.But for the case of accretion outflows, the accretion rate can be restored to hyper-Eddington after the cavity is refilled.Then, the strong disk outflow forms the cavity again and the evolution process is periodic.(d) The 'close-in' models.FRB originates from the magnetosphere of magnetars.The AGN disk environment only affects the propagation effect but not the radiation properties for magnetospheric origins.(e) The 'far-away' models.FRB originates from relativistic outflows of accreting COs.In addition to the propagation effect, high accretion rates in the AGN disk lead to more bright bursts for accreting-powered origins.
Figure 8 .
Figure8.The DM, free-free absorption optical depth and RM evolution for different magnetar formation channels.The parameters and the evolution timescale of each model are listed in Table3.2.Solid and dashed lines represent the case of SG and TQM disk models, respectively. ff,sh = 1 are shown in gray dashed horizontal lines.For SG-NSWD and SG-BNS models with = 10 8 ⊙ , the optical depth is still greater than unity when the cavity stops expanding.In other cases, the cavity becomes transparent sometime after the magnetar is born.For binary mergers or AIC progenitors, this time is hundreds to thousands of years for the SG disk model, while it only takes a few decades for the TQM disk model.For massive star progenitors, the time become transport is about a thousand years for the SG disk model, while it is shortened to a few hundred years for the TQM disk model.
WFigure 9 .
Figure 9. Solutions of Equations (60) and (61) for BH1.The model parameters are listed in the BH1 model in Table3.The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.The height expansion (blue lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ), the adiabatic expansion phase in the circum-disk medium (phase III, c < ≤ acc ) and the ST phase (phase IV, acc < ≤ cav ) in sequence.The width expansion (red lines) goes through the adiabatic expansion phase in the disk (phase I, ≤ bre ) and sp phase (phase V, > bre ) in sequence.
Figure 10 .
Figure 10.Solutions of Equations (66) and (67) for the NS2 model.The model parameters are listed in the NS2 model in Table3.When bre > acc , the shock evolution is in analogy with that of the SN explosion (see Figure6).The shock velocity and radius evolution are shown in the top and bottom panels, respectively.Timescales of breakout and leaving the disk boundary are shown in gray vertical dashed and dash-dotted lines, respectively.The height expansion (blue lines) goes through the ST phase in the disk (phase I, ≤ bre ), the Sakurai accelerating phase (phase II, bre < ≤ c ) and the ST phase in the circum-disk medium (phase III, c < ≤ cav ) in sequence.The width expansion (red lines) goes through the ST phase in the disk (phase I, ≤ bre ) and sp phase (phase IV, > bre ) in sequence.
Figure 11 .
Figure 11.The DM, free-free absorption optical depth and RM evolution of accretion COs model for BH1 (top panel) and NS2 (bottom panel).The model parameters are listed in Tabel 3. The contributions from the unshocked cavity and the shocked shell are shown in blue and red lines, respectively.Same as the young magnetar models, DM ff and RM show non-power-law evolution patterns over time in the Sakurai accelerating phase, which is different from other environments, e.g., SNR (Yang & Zhang 2017; Piro & Gaensler 2018), compact binary merger remnant (Zhao et al. 2021), PWN/MWN (Margalit & Metzger 2018; Yang & Dai 2019; Zhao & Wang 2021).Top panels: The time-dependent results are based on numerical solutions of Equations (60) and (61).Timescales of the breakout, leaving the disk boundary and effective accretion are shown in gray vertical dashed, dash-dotted and solid lines, respectively.For BH1, the cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).After the shock breakouts, the DM from the shocked shell and the free-free absorption become neglected compared to the disk wind.Bottom panels: The time-dependent results are based on numerical solutions of Equations (66) and (67).Timescales of the breakout and leaving the disk boundary are shown in gray vertical dashed and dash-dotted lines, respectively.For NS2, the cavity and the shocked shell are opaque for FRBs with = 1 GHz for a few decades (see the black dashed line for ff = 1).The results when acc < ≤ bre do not satisfy the approximation expression given by Equations (76) -(78).For NS2, the cavity radius is close to when ∼ acc , then it is no longer possible to assume that the density is constant as in the previous discussion (seeEquation (1)).Thus, we only show the approximate expression when > c for the shocked shell.
Table 2 .
(Margalit & Metzger 2018;Yang & Dai 2019;odel.The parameters are the same as Figure6.Blue and red lines represent the contribution from the cavity and shock shell, respectively.Timescales of FE duration, breakout and leaving the disk boundary are shown in gray vertical solid, dashed, and dash-dotted lines, respectively.Except for the Sakurai accelerating phase ( bre < ≤ c ), the approximate scaling laws for the remaining phases are given.In the Sakurai accelerating phase, DM ff and RM show non-power law evolution patterns over time, which is different from other environments, e.g., SNR(Yang & Zhang 2017;Piro & Gaensler 2018), compact binary merger remnant(Zhao et al. 2021), PWN/MWN(Margalit & Metzger 2018;Yang & Dai 2019; Zhao & Wang 2021).Although the material in the AGN disk is very dense, as the cavity expands, it can become transparent around a thousand years after the birth of the magnetar.Evolution timescale of cavities in the AGN disk for different progenitor models Radice et al. (2018)3)t al. (2013);Radice et al. (2018) 10 6 M (Margalit & Metzger 2018;Yang & Dai 2019;tion from the cavity and shock shell, respectively.Timescales of FE duration, breakout and leaving the disk boundary are shown in gray vertical solid, dashed, and dashdotted lines, respectively.Except for the Sakurai accelerating phase ( bre < ≤ c ), the approximate scaling laws for the remaining phases are given.In the Sakurai accelerating phase, DM ff and RM show non-power-law evolution patterns over time, which is different from other environments, e.g., SNR(Yang & Zhang 2017;Piro & Gaensler 2018), compact binary merger remnant(Zhao et al. 2021), PWN/MWN(Margalit & Metzger 2018;Yang & Dai 2019; Zhao & Wang 2021
Table 3 .
The intrinsic burst properties and evolution timescales of cavities in the AGN disk for different accretion CO models The total DM, ff and RM (including the contributions of the cavity and the shocked shell) from different accretion CO models.The model parameters are listed in Tabel 3.For the model BH1, BH2, BH3, BH4 and NS1, the effective accretion timescale is much longer than the shock breakout timescale, whose cavity evolution is governed by Equations ( (Tagawa et al. 2020;Perna et al. 2021) in AGN disks, AGN is the average lifetime of AGN disks and FRB is the fraction of CO that can produce FRB.Using the fitting results of AGN number density AGN / fromBartos et al. (2017), the event rate in AGN disk is given by(Tagawa et al. 2020;Perna et al. 2021)t\U | 16,345 | 2024-03-05T00:00:00.000 | [
"Physics"
] |
Man-in-the-middle-attack: Understanding in simple words
,
Introduction
In cryptography and PC security, a man-in-the-middle attack (MITM) is an attack where the attacker furtively transfers and perhaps changes the correspondence between two parties who trust they are straightforwardly communicating with each other. A man in the middle (MITM) attack is a general term for when a culprit positions himself in a discussion between a client and an application; either to listen stealthily or to imitate one of the parties, making it show up as though an ordinary trade of information is in progress (Meyer & Wetzel, 2004;Kish, 2006;Hypponen & Haataja, 2007;Ouafi et al. 2008). The objective of an attack is to take individual information, for example, login certifications, account points of interest and charge card numbers. Targets are normally the clients of financial applications, SaaS businesses, web-based business locales and other sites where logging in is required. Information obtained during an attack could be utilized for many, purposes, including fraud, unapproved support exchanges or an unlawful watchword change. Furthermore, it can be utilized to gain a decent footing inside an anchored edge during the infiltration phase of an Advanced Persistent Threat (APT) strike. Fig. 1 portrays a schematic of 'men-in-the-middle-attack' belief system. A man-in-the-middle attack allows a malicious actor to intercept, send and receive data meant for someone else, or not meant to be sent at all, without either outside party knowing until it is too late. Man-in-the-middle attacks can be abbreviated in many ways, including MITM, MitM, MiM or MIM (Ouafi et al., 2008;Joshi et al., 2009;Khader & Lai, 2016 ;Tung et al., 2016;Wallace & Miller, 2017;.
Fig. 1. Men-in-the-middle attack ideology schematic
One case of man-in-the-middle attacks is dynamic eavesdropping, in which the attacker makes independent associations with the victims and transfers messages between them to influence them to trust they are talking straightforwardly to each other over a private association when in certainty the whole discussion is controlled by the attacker. The attacker must have the capacity to intercept every single significant message passing between the two casualties and inject new ones. This is direct in many conditions; for instance, an attacker within gathering scope of an unencrypted wireless access point (Wi-Fi) could insert himself as a man-in-the-middle (Callegati et al., 2009;Desmedt, 2011). As an attack that goes for circumventing common authentication, or scarcity in that department, a man-in-the-middle attack can succeed just when the attacker can mimic every endpoint agreeable to them not surprisingly from the genuine closures. Comprehensively speaking, a MITM attack is what might as well be called a mailman opening your bank proclamation, writing down your record points of interest and after that resealing the envelope and delivering it to your entryway. Most cryptographic conventions include some type of endpoint authentication particularly to persist MITM attacks. For instance, TLS can authenticate one or the two parties using a commonly confided in endorsement expert (Sounthiraraj et al., 2014;Khader & Lai, 2015;Rahim, 2017).
Literature review
MITM is named for a ball game where two people play catch while a third person in the middle attempts to intercept the ball. MITM is also known as a fire brigade attack, a term derived from the emergency process of passing water buckets to put out a fire. In the year 2004, U. Meyer and S. Wetzel presented a report on Universal Mobile Telecommunication System's (UITM) security protocol where they discussed about 'men-in-the-middle-attack' on mobile communication (Meyer & Wetzel, 2004). In 2006, Kish published his research in a master listed journal where he showed an encryption method of MITM using Kirchhoff-loop-Johnson (-like)-noise cipher (Kish, 2006). Hypponen and Haataja (2007), made a research on secure Bluetooth communication and showed their developed system was capable of preventing MITM attack (Hypponen & Haataja, 2007). Sun et al., 2018 andSaif et al., 2018; made similar type of researches on updated version of Bluetooth networks security and discussed about new techniques to prevent MITM in two party's communication (Sun et al., 2018;Saif et al., 2018). Ouafi et al. (2008), Callegati et al. (2009), Joshi et al., (2009), Desmedt, (2011 and Sounthiraraj et al., (2014) conducted researches about HTTP security and those researches found MITM as a very serious threat and those also discussed about the prevention techniques (Ouafi et al., 2008;Callegati et al., 2009;Joshi et al., 2009;Desmedt, 2011;Sounthiraraj et al., 2014). Khader et al. (2015) and Tung et al. (2016) published their researches which mostly talks about different prevention methods of MITM (Khader & Lai, 2015;Tung et al., 2016). Wallace and Miller (2017) patented their research about endpoint based MITM where they tested multiple prevention methods for MITM (Wallace & Miller, 2017). ; did a survey on MITM and its effects on the economy. Li et al. (2017), Rahim (2017) and Howell et al. (2018) made identical researches on prevention of MITM mainly for internet communication and those papers discusses several unique and effective measures on prevention of MITM from on-net communication Rahim, 2017;Howell et al., 2018). Fei et al. (2017), Usman et al. (2018), Valluri (2018) and Kuo et al. (2018) published their review reports on MITM which mostly discusses about WLAN security for 2-way communication.
Progression of 'man-in-the-middle-attack'
Effective MITM execution has two distinct stages: interception and decryption; which involves being within physical closeness to the intended target, and another that exclusive involves malware, known as a man-in-the-browser (MITB) attack. With a conventional MITM attack, the attacker needs access to an unsecured, or ineffectively anchored Wi-Fi switch Rahim, 2017;Fei et al., 2018;Howell et al., 2018;Sun et al., 2018). These sorts of associations are by and large found out in the open territories with free Wi-Fi hotspots, and even in a few people's homes. An attacker will check the switch using code looking for particular shortcomings, for example, default or poor secret key utilize, or security gaps because of the poor arrangement of the switch. Once the attacker has discovered the powerlessness, they will then insert their instruments in the middle of the clients' PC and the sites the client visits. A fresher variation of this attack has been gaining fame with cybercriminals because of its simplicity of execution. With a man-in-the-browser attack, every one of an attacker needs are an approach to inject malware into the PC, which will then install itself into the browser without the clients' learning and will then record the information that is being sent between the victim and particular focused on sites, for example, financial institutions, that are coded into the malware. Once the malware has gathered the particular information it was modified to gather, it then transmits that information back to the attacker.
Interceoption
The initial step intercepts client activity through the attacker's system before it achieves its intended destination. The most well-known (and easiest) method for doing this is an inactive attack in which an attacker makes free/open wifi hotspots; accessible to general society. Commonly named in a way that relates to their area, they aren't watchword secured. Once a casualty interfaces with such a hotspot, the attacker gains full permeability to any online information trade. Attackers wishing to adopt a more dynamic strategy to interception may dispatch one of the following attacks: • IP spoofing involves an attacker disguising himself as an application by altering parcel headers in an ip address. Accordingly, clients attempting to get to a url associated with the application are sent to the attacker's site ('man in the middle (mitm) attack' (incapsula co.), 2016) • ARP spoofing is the way toward linking an attacker's mac address with the ip address of a legitimate user on a local area network using fake arp messages. Subsequently, information sent by the client to the host ip deliver is instead transmitted to the attacker (Meyer & Wetzel, 2004;Kish, 2006;Hypponen & Haataja, 2007;Ouafi et al., 2008;Callegati et al., 2009;Joshi et al., 2009;Desmedt, 2011) • DNS spoofing, otherwise called DNS store poisoning, involves infiltrating a DNS server and altering a site's address record. Accordingly, clients attempting to get to the site are sent by the adjusted dns record to the attacker's site (Ouafi et al., 2008;Joshi et al., 2009;Khader et al., 2015;Howell et al., 2018;Sun et al., 2018;Usman et al., 2018;Valluri, 2018;Kuo et al., 2018;Saif et al., 2018; 'man in the middle (mitm) attack' (incapsula co.)).
Decryption
After an interception, any two-way SSL movement should be unscrambled without alerting the client or application. Various strategies exist to accomplish this: • HTTPS spoofing sends an imposter endorsement to the victim's browser once the initial association demand for a safe site is made ('Man-in-the-middle attack' (Wikipedia)). It holds an advanced thumbprint related with the bargained application, which the browser confirms according to an existing rundown of confided in destinations. The attacker is then ready to get to any information entered by the casualty before it's passed to the application.
• SSL BEAST (browser abuse against SSL/TLS) focuses on a TLS variant 1.0 helplessness in SSL.
Here, the casualty's PC is infected with pernicious JavaScript that intercepts scrambled treats sent by a web application. Then the application's figure square chaining (CBC) is endangered in order to decode its treats and authentication tokens ('man-in-the-middle-attack-mitm' (Techpedia); "man-in-the-middleattack" (Rapid Web Ser.); 'What is a Man In The Middle attack?' (Symantec Corp.), Norton Security Blog,; 'What is UMTS?' (Tech Target Web), Blog Post) • SSL hijacking happens when an attacker passes produced authentication keys to both the client and application during a TCP handshake. This sets up what seems, by all accounts, to be a safe association when, actually, the man in the middle controls the whole session (K. Ouafi et al., 2008;Y. Desmedt, 2011; 'Man-in-the-middle attack' (Wikipedia); 'Flaw in Windows DNS client exposed millions of users to hacking' (SC Mag. UK), News Article) • SSL stripping minimizes an HTTPS association with HTTP by intercepting the TLS authentication sent from the application to the client. The attacker sends a decoded form of the application's site to the client while maintaining the anchored session with the application. In the meantime, the client's whole session is noticeable to the attacker Li et al., 2017;Rahim, 2017;Fei et al., 2018;Howell et al., 2018;Sun et al., 2018;Usman et al., 2018;Valluri, 2018).
MITM: What and how?
'Man-in-the-middle-attack' also known/abbreviated as MIM, MiM, MitM or MITMA is a type of cryptographic attack over a communication channel by a malicious third party where he/she takes over a confidential/personal communication channel between two or legitimate communicative points or parties. In this cyber attack, the attacker can control (read, modify, intercept, change or replace) the communication traffic between victims. But by using MITM protocol the unauthenticated attacker leaves no clues/traces of his interception of this cybercrime, in short words the attacker remains invisible to the victims.
It needs a communication channel to make a MITM attack. The most used communication channels of MITM attack are namely GSM, UMTS, Long-Term Evolution (LTE), Bluetooth, Near Field Communication (NFC), Radio Frequency and Wi-Fi. The first recorded MITM attack was planned in the time of WW-II for intercepting German Military's radio communication and was done by the Royal British Intelligence (also known as MI-6) (Kozaczuk, 1984). In normal sense, there are three most possible compromises, namely Confidentiality, Integrity, and Availability; which is aimed at my MITM attack. Most of the MITM attacks now days are done in social media, because of the extensive use of human communication are done using social media (Facebook, Twitter, Yahoo Messenger and etc. (Hudaib, 2014) Decoding a MITM attack is a long process, basically this is done using three ways, namely 1) Based on impersonation methods of cyber decoding, 2) Based on Telecommunication addressing techniques and lastly 3) Based on GPS locating method of attacker and victims both .
Present status of MITM attacks
Nowadays, most of the MITM attacks are performed using communication layers. Open System Intercommunication (OSI) and GSM networks are the most affected communication channels by MITM attacks. Table-1 shows types of MITM attacks on different OSI and Cellular service networks ('Man-inthe-middle attack' (Wikipedia); 'man-middle-attack' (CA Tech); 'man-in-the-middle-attack-mitm' (Techpedia); "man-in-the-middle-attack" (Rapid Web Ser.); 'What is a Man In The Middle attack?' (Symantec Corp.), Norton Security Blog); 'What is UMTS?' (Tech Target Web), Blog Post; 'Flaw in Windows DNS client exposed millions of users to hacking' (SC Mag. UK), News Article; Fatima, 2011;Kozaczuk, 1984;Hudaib, 2014;. Table 1, we list MITM attacks across OSI layers and cellular networks. Each layer enforces different approaches to provide security. Nevertheless, neither of them is free from MITM attacks. Ornaghi et al. 2003, at a European conference, was the first to present a security system-based tracking location of the attacker and victim. He classified MITM attacks in three distinct categories: a) LAN (Local Area Network) tracking, b) LAN to Remote Network tracking and c) Remote Network track. The authors also take into consideration that STP mangling is a closed type of MITM as the attacker can only manage to decode the unmanaged traffic between two clients.
Spoofing: Most common MITM
Spoofing an impersonation technique which is originated from 'spying'. In the middle century, European spies used to hear secret conversation by impersonating him/her to the communicative party. The same method is applied in modern cryptographic spoofing, as the attacker intercepts a confidential/personal communication between two hosts and controls over transferring data, while the hosts are not being aware of the unauthenticated attacker. Some research papers ('Flaw in Windows DNS client exposed millions of users to hacking' (SC Mag. UK), News Article; 'What is UMTS?' (Tech Target Saif et al., 2018;Kuo et al., 2018;Valluri, 2018;Usman et al., 2018;Senie & Ferguson, 1998;Humphreys et al., 2008;Scott, 2001;Schuckers, 2002) describe spoofing as the first step of executing MITM, not being the total of a MITM attack; while some other deliciated research papers claim spoofing as a whole MITM process. In this paper, we will consider it as a spoofing based MITM or spoofing attack. When a party wants to communicate with other parties over a cryptographic network then if their network is same with an unknown MAC address then the server broadcasts an address resolution protocol (also abbreviated as ARP) request to all hosts under the same network connection. The client with the announced Internet Protocol is only expected to make a reply including his/her MAC (Media Access Control) address. However, when ARP cache is managed in a dynamic mode, cache entries can be easily fabricated by forged ARP messages, since proper authentication mechanism is missing . In the meantime, the communicating medium saves the IP to MAC entry in its local cache, so the next time communication can be speeded up, by avoiding the broadcasts.
3)When 'A' wants to send a message to 'B', it will go to 'X''s MAC address EE:EE:EE:EE:EE:X3, instead of 'B''s BB:BB:BB:BB:BB:X2. 4) When 'B' wants to send a message to 'A', it will also go to 'X'.
Schematic regarding the example stated above is given in Fig. 2.
Fig. 2. Spoofing method between two clients
There are many well-researched works of literature where spoofing defending system is discussed. Among them T. Demuth et al., 2005, D. Pansa et al., 2008, Z. Trabelsi et al., 2007and R. Philip et al., 2007 are mostly considerable (D. Pansa and T. Chomsiri, 2008;T. Demuth and A. Leitner, 2005; Z. Trabelsi and W. El-Hajj, 2007;R. Philip et al., 2007). They introduced various well-researched techniques to prevent spoofing and make secure communication over LAN. But those Literature doesn't concern about wireless methods of communications. Table 2 below shows a typical comparison between spoofing prevention techniques:
MITM on GSM: A threat to phone communication security
In the early 90's, the European Telecommunications Standards Institute introduced GSM as a second generation (2G) telecommunication standard. Today, according to the mobility report (SAMSUNG ELECTRONICS SUSTAINABILITY REPORT), GSM covers more than 90% of the world population. There are two basic types of services offered through GSM: telephony and data bearer. The GSM architecture consists of Mobile Stations (MSs) and Base Terminal Stations (BTS), which communicate with each other through radio links. Each BTS connects to the Base Station Controller (BSC). BSC links to the Mobile Switching Center (MSC), which is responsible for routing signals to and from fixed networks (Z. . Home Location Register (HLR) and the Visitor Location Register (VLR) are the two major databases for each mobile service provider in the GSM architecture. Fig. 3 shows a schematic of GSM architecture. Each of GSM subscribers has the secret key, which is stored in the Subscriber Identity Module (SIM) card of the MS. The Authentication Center (AUC) has a secret key, which is shared with the subscriber and AUC. AUC generates a set of security parameters for execution of encryption and authentication. Fig. 3. GSM Architecture (Kurose, 2005) The main idea behind the attack is to impersonate same mobile network code as the legitimate GSM network to false BTS (or IMSI Cather (Hardin, 2018)) and convince the victim that this station is the valid one. Let us consider the next example: network consists of the Legitimate MS, Legitimate BTS, False BTS, and False MS. Attacker's network is a combination of the False BTS and False MS. While in standby mode the MS connects to the best received BTS. Therefore, False BTS should be more powerful than the original one, or closer to the target. If the victim is already connected, then the attacker requires to drawn any present real stations. The algorithm of the FBS-based MITM attack on GSM is the following: 1) Attacker sets-up connection between False BTS and Legitimate MS.
2) False MS impersonates the victim's MS to the real network by resending the identity information, which was received from the step 1.
3) Victim's MS sends its authentication information and cipher-suites to the False BTS.
4) Attacker forwards message from step 3 to the Legitimate BTS, with changed authentication abilities of the MS to do not support encryption (A5/0 algorithm ), or to weak encryption algorithm (e.g., A5/2). Finally, the authentication is finished. All following messages between the victim and real network are going through attacker's entities, with encryption specified by an attacker, or no encryption at all. This manipulation is possible since GSM does not provide the data integrity (Chen et al., 2007), as a result, the attacker can catch, modify, and resend messages. At the designing phase of the GSM protocol, FBS seemed impractical due to costly required equipment, but currently, this kind of attack is completely applicable since costs decreased (Feher et al., 2018). Paik et. al. (2010); besides describing GSM security concerns, pointed out that nowadays attackers are better equipped. Among the reasons we can identify opensource projects (e.g., Open BTS (Burgess & Samra, 2008)) and low-cost hardware (e.g., Ettus Research (A. N. I. C. Ettus Research. Ettus research -the leader in software-defined radio (SDR))). In particular, an attacker can build its own false BTS for less than $1000. An algorithm of FBD based MITM attack on GSM network is given below in Fig. 5. Table 3 discusses various FBS based MITM attacks prevention approaches and different attacks with regarding references.
Statistical analysis of MITM attack
For statistical analysis of the MiM attacks, we refer to the usual finite lattice of security levels, ( , ⊑ , ⨅ , ⨆ ,⊺ , ) and based on it define : → as a mapping from names to their security levels. Now, we can define the name integrity property as follows.
Property [Name integrity]
We say that a name, , has the integrity property with respect to a environment if ∀ ∈ : ⊑ The predicate integrity , indicates that upholds the above property with respect to . A MITM attack is defined as an attack in which the intruder is capable of breaching the integrity of names of two processes.
Property [Man-in-the-Middle Attack]
A context, (a process with a hole) succeeds in launching a MiM attack on two processes, and , if the result of the abstract interpretation, | | | || proves that∃ ∈ , ∈ : , ⋁ , .
Preventing MITM
Blocking MITM attacks requires a few down to earth ventures with respect to clients, and additionally a combination of encryption and check techniques for applications. For clients, this implies: • Avoiding WiFi associations that aren't password encrypted.
• Paying consideration regarding browser warnings reporting a site as being unsecured.
• Immediately logging out of a protected application when it's not in utilize.
For site administrators, secure correspondence conventions, including TLS and HTTPS, help relieve spoofing attacks by vigorously encrypting and authenticating transmitted information (Fatima, 2011). Doing so keeps the interception of site activity and hinders the decoding of delicate information, for example, authentication tokens. It is viewed as best practice for applications to utilize SSL/TLS to anchor each page of their site and not only the pages that expect clients to sign in. Doing so helps diminishes the possibility of an attacker stealing session treats from a client browsing on an unsecured segment of a site while signed in.
Conclusion
The MITMs interrupt interchanges between two frameworks, and this phenomenon takes place when the attacker is responsible for a switch along typical point of movement. The attacker in all cases is situated on a similar communicated domain as the victim stands. Indeed, in a HTTP exchange, a TCP protocol exists among the customer and the server. The attacker divides the TCP protocol into two connectionsone between the victim and the attacker and the other between the attacker and the server. On intercepting the TCP protocol, the attacker goes about as an intermediary reading, altering and inserting information in intercepted correspondence. In an unsecured connection (e.g. HTTP protocol), the communication of two users can be hacked by an intruder without any difficulties. In a HTTPS connection, a single TCP protocol is attained by building two independent SSL connections. A MITM attack exploits the shortcoming in arrange correspondence convention, convincing the casualty to course movement through the attacker instead of ordinary switch and is by and large alluded to as ARP spoofing. This unethical phenomenon can affect a country's economy and may be a reason of instability between nations by stealing/modifying classified/secret defense sector data/information. So, this unethical phenomenon has to be prevented, and the necessary measures should be taken for ending. Although the paper did not focus on extensive analysis for future research directions of MITM, but a good understanding about MITM and the technologies for preventing MITM like Li-Fi were discussed briefly. | 5,205.2 | 2019-01-10T00:00:00.000 | [
"Computer Science"
] |
Zinc-Intercalated Halloysite Nanotubes as Potential Nanocomposite Fertilizers with Targeted Delivery of Micronutrients
This study reports on the development of nanocomposites utilizing a mineral inhibitor and a micronutrient filler. The objective was to produce a slow release fertilizer, with zinc sulfate as the filler and halloysite nanotubes as the inhibitor. The study seeks to chemically activate the intercalation of zinc into the macro-, meso-, and micropores of the halloysite nanotubes to enhance their performance. As a result, we obtained three nanocomposites in zinc sulfate solution with concentrations of 2%, 20%, and 40%, respectively, which we named Hly-7Å-Zn2, Hly-7Å-Zn20, and Hly-7Å-Zn40. We investigated the encapsulation of zinc sulfate in halloysite nanotubes using X-ray diffraction analysis, transmission electron spectroscopy, infrared spectroscopy (FTIR), and scanning electron microscopy with an energy-dispersive spectrometer. No significant changes were observed in the initial mineral parameters when exposed to a zinc solution with a concentration of 2 mol%. It was proven that zinc was weakly intercalated in the micropore space of the halloysite through the increase in its interlayer distance from 7.2 to 7.4. With an increase in the concentration of the reacted solution, the average diameter of the nanotubes increased from 96 nm to 129 nm, indicating that the macropore space of the nanotubes, also known as the “site”, was filled. The activated nanocomposites exhibit a maximum fixed content of adsorbed zinc on the nanotube surface of 1.4 wt%. The TEM images reveal an opaque appearance in the middle section of the nanotubes. S SEM images revealed strong adhesion of halloysite nanotubes to plant tissues. This property guarantees prolonged retention of the fertilizer on the plant surface and its resistance to leaching through irrigation or rainwater. Surface spraying of halloysite nanotubes offers accurate delivery of zinc to plants and prevents soil and groundwater contamination, rendering this fertilizer ecologically sound. The suggested approach of activating halloysite with a zinc solution appears to be a possible route forward, with potential for the production of tailored fertilizers in the days ahead.
Introduction
Halloysite is a unique mineral (Al 2 Si 2 O 5 (OH) 4 ) of the class of phyllosilicates and the kaolinite-serpentine group.Its distinctive feature is the morphology of the particles, which are natural nanotubes [1][2][3][4].Chemically, halloysite is like kaolinite and has a hollow tubular structure in the submicron range [5,6].Halloysite can be divided into 10Å-halloysite and 7Å-halloysite, where angstroms show the interlayer spacing of the crystal structure [7].In a geologic context, halloysite can form in many environments where conditions are present for its formation, including volcanic, tropical, and glacial environments.It is common in many rock types, including volcanic, sedimentary, and hydrothermal rocks [8,9].The bestknown deposits of halloysite are found in New Zealand, the United States [10], France [11], and Turkey [12].In New Zealand [13], halloysite is mined from rhyolite rocks in the Matauri Bay deposit [14,15].Halloysite from New Zealand is used mainly in the production of high-quality tableware because of its high whiteness and translucency [16].
Halloysite nanotubes are a readily available and cost-effective raw material for various industries, including agriculture, crop processing, and food processing [37][38][39][40].Halloysite exhibits diverse polar charges on its basal planes, including macro-, meso-, and micropores, which make it a suitable medium for transporting plant growth regulators and micronutrients.Zinc has been identified as a prospective filler material for nanotubes.
Zinc plays a crucial role in plant nutrition, participating in various physiological and biochemical processes, including photosynthesis and the synthesis of growth hormones [41][42][43].Zinc deficiency leads to slower plant growth and reduces plant resistance to fungal diseases [44].Microfertilization with zinc in the form of sulfate depends on the soil's geochemistry and pH.For instance, zinc becomes unavailable to plants in calcareous soils with high phosphorus and organic matter content [45][46][47].Furthermore, due to its mobility in wet soils, zinc, changing into ionic form, can leach into groundwater, which subsequently leads to the saturation of water bodies and negatively affects the fauna of the environment [48].Controlled-release zinc fertilizers can be used to combat these effects [44,[49][50][51].
The purpose of the study is to incorporate zinc into macro-, meso-and micropores of halloysite nanotubes by chemical activation to create composites with targeted delivery of trace elements.
Minerals and Materials
Halloysite nanotube concentrate supplied by Halloysite-Ural LLC (Chelyabinsk, Russia) was used as a mineral raw material in this work.Zinc was chosen as the active component to obtain nanocomposites.A zinc sulfate solution with a zinc concentration of 22% was used as a source of nutrient additive.
Chemical and Mechanochemical Preparation of Nanocomposites
The primary stage in the activation of nanocomposites entailed soaking tubular halloysite crystals in a zinc sulfate solution.A solution containing zinc was prepared by adding a 22% concentration of zinc sulfate to distilled water.Next, 10 mL of the resulting zinc solution was added to 40 g of halloysite nanotubes.The mixture was thoroughly combined and then dried in petri dishes for 48 h at room temperature.Solutions containing different concentrations of zinc sulfate-2%, 20%, and 40% (zinc concentration 0.4, 4, 8%, respectively)-were utilized in the production of nanocomposite fertilizer.This resulted in the creation of Hly-7Å-Zn2, Hly-7Å-Zn20, and Hly-7Å-Zn40 nanocomposites.Technical abbreviations are defined upon their initial usage.
Prior to activation, the original halloysite sample was dried to eliminate extra moisture from the mineral structure.The pre-treatment procedure entailed drying the original halloysite sample in a desiccator at a temperature of 60 • C for a period of 6 h.
Characterization of the Nanocomposites
In order to investigate the key parameters of the produced nanocomposites and verify the intercalation of zinc into halloysite nanotube structures, a range of laboratory and analytical investigations were conducted which included Fourier transform infrared spectroscopy (FTIR), laser Raman spectroscopy, X-ray diffraction analysis (XRD), scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM-EDS), and transmission electron microscopy (TEM) with selected area electron diffraction (SAED).
X-ray diffraction analysis was performed to ascertain the bulk mineral composition of the initial halloysite sample and nanocomposites and to gauge the interlayer spacing in halloysite crystals.The analysis was conducted on a Rigaku Ultima IV diffractometer, utilizing a Cu Kα anode at a voltage of 40 kV and a current of 30 mA.Diffraction patterns were obtained in the angle range of 3-65 • on a 2θ scale with a rate of 1 • per minute and a step of 0.02 • , thereby enabling the determination of the crystal structure and interplanar spacing of halloysite.
Scanning electron microscopy was utilized to examine the microstructural characteristics and chemical composition of the nanocomposites.The TESCAN Vega 3 SBU scanning electron microscope (Teskan, Brno, Czech Republic) with an OXFORD X-Max 50 energy dispersive X-ray microanalysis detector (Oxford Instruments, Abingdon, UK) was used for the investigation.
Imaging parameters comprised an accelerating voltage within the range of 10-20 kV, a sample current ranging from 3 to 12 nA, a focal length within the range of 5-15 mm, and operation in full vacuum mode.The analyzed samples were dried crumbly specimens of nanocomposites and starting material.In addition, plant leaves treated with water containing nanocomposites were analyzed.This analysis enabled us to identify the microstructural characteristics and chemical composition of the nanocomposites, as well as the interaction between plant tissues and halloysite tubes.
Transmission electron microscopy (TEM) was conducted to examine the structure of halloysite nanotubes pre-and post-activation while visually assessing the existence of zinc in the nanotubes' central region.A JEOL JEM-2100F microscope (JEOL, Tokyo, Japan), with an accelerating voltage of 200 kV, was used for the TEM study.The analytical samples were prepared by converting the ground nanocomposites into a fine powder and then depositing the powder on a copper grid that had been precoated with a carbon film.This technique enabled us to acquire TEM images of the nanocomposites, complemented by local electron diffraction, which further confirmed the structural variations.
IR spectroscopy was utilized to identify chemical bonds and functional groups within the nanocomposites.The spectra were acquired using a Shimadzu FTIR 8400S IR spectrometer in Kyoto, Japan, within the 4000 to 400 cm −1 wave number range.The DLATGS detector and KBr pellets provided a resolution of 4 cm −1 , thereby enabling the analysis of the nanocomposites' chemical composition and functional groups.Abbreviations and technical terms shall be explained when used for the first time.
Laser Raman spectroscopy was carried out using a Thermo Scientific Fisher DXR2 spectrometer (Thermo Electron Scientific Instruments LLC, Madison, WI, USA) at a laser wavelength of 785 nm and a power of 10-15 mW.Repeated acquisitions were accumulated to improve the signal-to-noise ratio in the spectra with five 10-s scans in the range 0-3300 cm −1 .
Experimental Methods
To assess the impact of nanocomposites and plants, the flower surface was sprayed with a solution containing Hly-7Å-Zn40 nanocomposite, which was prepared by combining 10 g of the nanocomposite with 0.5 L of distilled water.The resulting mixture was sprayed on the flower from a distance of 10-15 cm from a household sprayer set to fine spraying mode.Also, for comparative analysis, wash-off tests simulating rain were performed by spraying the Hly-7Å-Zn40 nanocomposite and zinc sulfate solution onto pre-moistened plant leaves.Following the application, the plants were kept under normal room tempera-ture conditions for 24 h.After this period, the leaves were cautiously taken off from the plants and scrutinized via scanning electron microscopy (SEM).
Nanocomposite Morphology
The initial halloysite concentrate is a concentration of chaotically oriented nanotubes with a length of less than 5 µm and an average diameter of about 96 nm.Morphometric analysis of SEM images was performed to analyze in more detail the morphological changes of the nanotubes after activation.For each nanocomposite, observations were made in 10 sections.The initial width of the tubes averaged between 79 and 114 nm (representing the first and third quartiles).After chemical activation, the increase in the average crystal diameter reaches 127-132 nm (Supplementary Materials, Table S1).The maximum diameter values are observed in the Hly-7Å-Zn40 nanocomposite and reach 382 nm (Figure 1C).These results confirm the morphological changes of the nanotubes after chemical activation.
Structural Characteristics of Nanocomposites
The X-ray diffraction pattern of the original halloysite indicates the presence of basal reflections characteristic of halloysite, kaolinite, sanidine, and quartz.Basal reflections at 10.0, 7.2, 5.1, 4.5, 4.1, and 3.7 Å correspond to basal reflections of 7 Å and 10 Å-modified halloysite and kaolinite (Figure 2).After activation, the appearance of the first basal reflection (001) at 10.3 Å is observed in the diffractograms of Hly-7Å-Zn2 and Hly-7Å-Zn20 nanocomposites.The Hly-7Å-Zn20 nanocomposite exhibits an increase in basal reflection towards greater interplanar distances, reaching up to 7.3 Å.Meanwhile, the Hly-7Å-Zn40 nanocomposite shows maximum basal reflection shifts at 10.5 and 7.4 Å at the highest concentration of zinc solution (Figure 2).In the local electron diffraction (SAED) patterns in the TEM image, the nanotube nanocomposites are characterized by increased interlayer distances relative to the original halloysite (Figure 3).The thickness of the crystal packet from 7.2 to 7.4 Å increases as the solution concentration increases.In the Hly-7Å-Zn2 composite, no increase in the interlayer spacing is observed.An increase in the interlayer spacing is observed in Hly-7Å-Zn20 and Hly-7Å-Zn40 composites.These data agree with the X-ray diffractogram data.TEM images also demonstrate that the central voids in the activated nanotubes (Figure 3B-D) exhibit diminished translucency compared to those of the initial halloysite (Figure 3A).
The infrared spectra of the obtained composites are characterized by strain fluctuations in the range of 3694-3696, 3667-3669, 3651 and 3665 cm −1 , corresponding to the characteristic groups of the inner surface and inner Al-OH-Al (Figure 4).It is important to note that the strain fluctuations associated with the inner surface OH ions become less pronounced starting from the Hly-7Å-Zn20 composite.It is also observed that the peaks at 3445, 3449, 3451, 3455, 1636, 1630, and 1400-1468 cm −1 represent adsorbed water, and an increase in the intensity of the stretching vibrations is observed with increasing solution concentration.At the same time, the absorption bands in the C-H group of the alkylammonium, which are in the range of 3000-2800 cm −1 , remain almost constant in intensity, as do the symmetric and asymmetric vibrations in the low-frequency range.The Raman spectra of the initial halloysite (Hly-7Å) and activated halloysite (Hly-7Å-Zn2) at the minimum concentration are similar (Figure 5).However, Hly-7Å-Zn20 and Hly-7Å-Zn40 showcase new peaks, corresponding to Si-O-Al translation modes at 706 cm −1 and the libration mode of the inner Al-OH groups at 910 cm −1 .The characteristic peaks of zinc sulfate (980 cm −1 ) do not overlap in the spectra of activated nanocomposites (Figure 5).The intensity of the peaks for the 127 cm −1 v 2 (Al-O) and 460 cm −1 v 4 (Si-O) bands increases with increasing zinc concentration in the nanocomposites (Figure 5).
Chemical Composition of Nanocomposites
According to the results of the EDS analysis, the mean constitution of halloysite is as follows: Al 2 O 3 42.2-43.8%,SiO 2 54.0-55.8%,K 2 O 0.3-1.01%,and Fe 2 O 3 (total) 0.9-1.4% (Supplementary Materials, Table S2).The existence of adsorbed zinc on the surface of the nanotubes in the Hly-7Å-Zn2 and Hly-7Å-Zn20 nanocomposites is not discernible, while zinc signals on the spectra were recorded.The bulk composition analysis of Hly-7Å-Zn40 reveals the presence of zinc within the range of 0.7-1.4wt.%.
note that the strain fluctuations associated with the inner surface OH ions become less pronounced starting from the Hly-7Å-Zn20 composite.It is also observed that the peaks at 3445, 3449, 3451, 3455, 1636, 1630, and 1400-1468 cm −1 represent adsorbed water, and an increase in the intensity the stretching vibrations is observed with increasing solution concentration.At the same time, the absorption bands in the C-H group of the alkylammonium, which are in the range of 3000-2800 cm −1 , remain almost constant in intensity, as do the symmetric and asymmetric vibrations in the low-frequency range.The Raman spectra of the initial halloysite (Hly-7Å) and activated halloysite (Hly-7Å-Zn2) at the minimum concentration are similar (Figure 5).However, Hly-7Å-Zn20 and Hly-7Å-Zn40 showcase new peaks, corresponding to Si-O-Al translation modes at 706 cm −1 and the libration mode of the inner Al-OH groups at 910 cm −1 .The characteristic peaks of zinc sulfate (980 cm −1 ) do not overlap in the spectra of activated nanocomposites (Figure 5).The intensity of the peaks for the 127 cm −1 v2 (Al-O) and 460 cm −1 v4 (Si-O) bands increases with increasing zinc concentration in the nanocomposites (Figure 5).
Chemical Composition of Nanocomposites
According to the results of the EDS analysis, the mean constitution of halloysite is as follows: Al2O3 42.2-43.8%,SiO2 54.0-55.8%,K2O 0.3-1.01%,and Fe2O3(total) 0.9-1.4% (Supplementary Materials, Table S2).The existence of adsorbed zinc on the surface of the nanotubes in the Hly-7Å-Zn2 and Hly-7Å-Zn20 nanocomposites is not discernible, while zinc signals on the spectra were recorded.The bulk composition analysis of Hly-7Å-Zn40 reveals the presence of zinc within the range of 0.7-1.4wt.%.
Interaction of Nanotubes with the Plant Surface
A detailed study of the interaction of sputtered halloysite and zinc sulfate nanotubes with the surface of plant tissues was carried out using SEM.High-resolution images (Figure 6A,B) clearly show that halloysite nanotubes are attached to the leaf as "needles".EDS investigation identified zinc signals on the plant surface (Figure 6A,B).After conducting washing tests, a significant number of nanotubes remained on the surface of the plant tissue, and the zinc sulfate had almost wholly disappeared (Figure 6B).The field of view measuring 100 µm shows the even distribution of zinc on the plant's surface (Figure 6A,B).
Interaction of Nanotubes with the Plant Surface
A detailed study of the interaction of spu ered halloysite and zinc sulfate nanotubes with the surface of plant tissues was carried out using SEM.High-resolution images (Figure 6A,B) clearly show that halloysite nanotubes are a ached to the leaf as "needles".EDS investigation identified zinc signals on the plant surface (Figure 6A,B).After conducting washing tests, a significant number of nanotubes remained on the surface of the plant tissue, and the zinc sulfate had almost wholly disappeared (Figure 6B).The field of view measuring 100 µm shows the even distribution of zinc on the plant's surface (Figure 6A,B).
Discussion
Halloysite is not a typical mineral for creating fertilizers.The encapsulation of zinc into halloysite nanotubes using zinc nitrate and borate as a solution [52,53] has been previously discussed to create refractory and anticorrosion coatings or preparations with antimicrobial properties [54].Halloysite has also been tested as a sorbent for the uptake of zinc and other heavy metals from polluted waters [55,56].The efficacy of targeting nutrient delivery to plants using biocomposites based on modified halloysite nanotubes has also been reported [57,58].The positive results seen in the intercalation and adsorption of zinc compounds into halloysite nanotubes, which have demonstrated potential for diverse applications, served as the precursor for this study.
The design of the authors' work was focused on creating composites of targeted action with the possibility of using them by spraying them on plants, i.e., in the agroindustry.The presence of sharp or fractured edges on halloysite nanotubes facilitates a robust adhesion to plant tissues, enabling a targeted administration of micronutrients (such as zinc) through a "poking" or "injection" mechanism.This feature prevents the undesired removal of these nutrients by rain or irrigation water.The images from the SEM obtained of the plant tissue surface treated with nanocomposite fertilizer demonstrate the observable impact of nanotubular halloysite particles, as shown in (Figure 6A).After washing experiments with rain-simulating water, most of the nanotubes remained attached to the plant tissue surface (Figure 6B).Conversely, the sprayed zinc sulfate was almost entirely removed from the leaf surface (Figure 6C).It was observed that premoistening the leaf surface did not stop the halloysite nanotubes from sticking to it.
The peculiarity of the chemical activation in this work is using zinc sulfate as a solution.According to XRD data in the activated composites, as the concentration of zinc sulfate in the reagent solution increases, a shift of the first basal reflex by about 0.2 Å in the direction of the crystal lattice expansion is observed (Figure 2), which shows the adsorption of zinc in the micropores of the mineral.The expansion of the crystal lattice of halloysite after the activation experiments is also confirmed in TEM images from local electron diffraction patterns (Figure 3).Besides the increase in the interlayer spacing, new basal peaks at 10.3 and 10.5 Å are observed, indicating an increase in the thickness of the 10 Å interlayer spacing of halloysite, which overlapped with the first basal peak of kaolinite (9.95 Å).The last is also confirmed by the increasing intensity of the new basal reflex as the zinc solution concentration increases.At first glance, the interplanar distance increased insignificantly, but according to the morphometry of halloysite nanotubes on SEM images, their width increases by an average of 33 nm as the concentration of the initial reacting solution increases.The linear increase in the average width of the nanotubes shows the expansion of the interlayer distance of the halloysite due to the introduction of the zinc substance into the meso-micro pores (Supplementary Materials, Table S1).Another crucial aspect is that the SEM image (Figure 3A) shows that most of the nanotubes in the original halloysite have a transparent central part.Conversely, this is not the case for the activated composites; the central part of the nanotubes is not transparent (Figure 3B-D).This implies that aside from the interlayer space, the zinc substance also occupies the central part.
During activation, water is used to dissolve zinc sulfate, and it follows that water will inevitably adsorb to the inner and outer surfaces of the crystal structures during activation.However, in the 3400 and 3500 cm −1 range, the intensity of vibrations was observed as the concentration of zinc sulfate increased, while the amount of water in the reagent did not change.Thus, the effect of water on the crystal lattice can be ruled out.From this, it can be concluded that the increased vibration intensity associated with adsorbed water is due to the increase in total surface area for aqueous compounds.This conclusion can be related to the increase in interlayer distance.It can also be concluded that zinc ions penetrate more intensively into halloysite structures than water ions.Otherwise, no changes would be seen in the IR spectrometry data.The disappearance of the peak (3667 cm −1 ) responsible for the O-H stretching of internal hydroxides and the formation of an almost monopic peak at 3655 cm −1 in Hly-7Å-Zn20 and Hly-7Å-Zn40 nanocomposites are attributed to the hydrolysis of the silinol groups adsorbed on the surface of halloysite nanotubes [52,[59][60][61].This indicates structural modifications of the nanotubes and confirms the activation of halloysite by zinc sulfate.The appearance of new peaks at 706 and 910 cm −1 in Raman spectra indicates the intercalation of zinc ions into the halloysite structure.Their intensity also increases with increasing concentration of zinc fraction in the nanocomposites (Figure 5) [62,63].The absence of characteristic peaks of zinc sulfate in the spectra of the nanocomposites confirms the absence of adsorption of independent forms of zinc sulfate on the surface of halloysite nanotubes.
According to SEM-EDS data, the amount of zinc adsorbed on the nanotube surface in activated nanocomposites with 2% and 20% zinc sulfate solutions did not stay the same.At the same time, there were intense zinc signals on the spectrum.However, the interlayer distance in the Hly-7Å-Zn20 nanocomposite increased to 7.3 Å, indicating the incorporation of zinc into the mineral structure.The inner and outer walls of halloysite nanotubes exhibit pH-dependent positive and negative charges [34,37].In acidic environments, the nanotubes' outer surface develops positive charges due to deprotonation [37].Conversely, the inner surface acquires positive charges from outer wall protonation in alkaline environments.It should be noted that under highly acidic or alkaline conditions the nanotube structures degrade [64].The solution was prepared by dissolving zinc sulfate in distilled water with an approximate neutral pH of 5.0-7.0.Under these circumstances, correspondingly, Zn + cations and SO 4 2− anions from dissolved zinc sulfate will be drawn to the opposing charges of the inner and outer surfaces of the nanotubes (Figure 7).Under this environment, the surface of halloysite nanotubes has a neutral or slightly positive charge (Figure 7) [64].On the other hand, plant cell membranes are negatively charged, which, according to the laws of electrostatic interactions, favors the penetration into plant tissues and the slow release of positively charged Zn + ions [65].The presence of a modest positive charge on the surface of the nanotubes may have prevented random adsorption onto the surface.This may explain why the nanotubes took up the whole Hly-7-Zn20 nanocomposite solution, including the large, medium, and small pores.At a 40 mol% zinc sulfate concentration, the Hly-7-Zn40 nanocomposite had 1.4 wt% of zinc on the surface of the nano-needle particles.Also, no independent zinc compounds were observed upon detailed examination of the activated nanotubes via SEM.
environment, the surface of halloysite nanotubes has a neutral or slightly positive charge (Figure 7) [64].On the other hand, plant cell membranes are negatively charged, which, according to the laws of electrostatic interactions, favors the penetration into plant tissues and the slow release of positively charged Zn + ions [65].The presence of a modest positive charge on the surface of the nanotubes may have prevented random adsorption onto the surface.This may explain why the nanotubes took up the whole Hly-7-Zn20 nanocomposite solution, including the large, medium, and small pores.At a 40 mol% zinc sulfate concentration, the Hly-7-Zn40 nanocomposite had 1.4 wt% of zinc on the surface of the nano-needle particles.Also, no independent zinc compounds were observed upon detailed examination of the activated nanotubes via SEM.The absence of adsorbed zinc in the Hly-7Å-Zn20 nanocomposite increased the interlayer distance.Conversely, in Hly-7Å-Zn40, the unadsorbed forms of zinc sulfate were observed to be fixed on the surface of the nanotube particles.The nanotube macropores The absence of adsorbed zinc in the Hly-7Å-Zn20 nanocomposite increased the interlayer distance.Conversely, in Hly-7Å-Zn40, the unadsorbed forms of zinc sulfate were observed to be fixed on the surface of the nanotube particles.The nanotube macropores exhibit a significant adsorption capacity, enabling them to be filled with most of the solution undergoing the reaction.The different adsorption centers of halloysite, encompassing macro-, meso-, and micro-pores, collectively confer targeted and sustained functionality in nanocomposites.
When using the resulting fertilizer through spraying, zinc release can be adjusted by altering the pH of the sprayed water.As previously mentioned, an alkaline environment leads to deprotonation of the inner surface of the halloysite, resulting in positively charged particles that push the zinc ions outward.This characteristic holds significant importance for agriculture and horticulture.By regulating the pH of the spray water, farmers and gardeners can manage the discharge of zinc from fertilizer into crops.For instance, in alkaline soils, which generally have insufficient zinc levels for plants, the fertilizer can be altered to facilitate the efficient release of zinc in an adequate amount.On the other hand, in acidic soils where zinc solubility is high, zinc release can be reduced to avoid excess accumulation, which could be detrimental to plants and the environment.Therefore, the ability to adjust zinc release based on water pH makes this halloysite and zinc-based fertilizer a controlled tool to optimize plant nutrition and increase yields while minimizing adverse environmental impact.
Despite the many advantages of fertilizers based on halloysite nanotubes, it is essential to consider the potential drawbacks.One of these is that excessive exposure of halloysite to the soil, whether through fertilizer spraying or rainwater runoff, can lead to accumulation of halloysite in the soil.
While halloysite mineral itself does not present any threat to the environment, its existence in soil can have an impact on the chemical equilibrium.Halloysite can increase the pH of the soil, making it more alkaline [66,67].His alteration in pH can be disadvantageous for certain plant species, particularly those that thrive in acidic conditions.Based on this, it is essential to consider soil and plant characteristics before using halloysite nanocomposites as fertilizer.
Conclusions
The examination of activated zinc-containing nanocomposites and their comparison with the original zinc-containing nanocomposites resulted in the following conclusions.
(1) The study confirms the potential for zinc intercalation into the meso-microporous spaces of halloysite.It was observed that the minimum concentration of zinc sulfate solution required for this is 20%.(2) The interaction of halloysite with zinc sulfate is contingent on the concentration of the sulfate solution, affecting both the location and shape of the incorporated zinc within the halloysite structure.Complete absorption of zinc within the nanotube structure is observed upon activation of the halloysite using a 20% zinc sulfate solution.Conversely, when a more concentrated solution (40% zinc sulfate) is used, zinc adsorption in sulfate on the tube surface is observed.This phenomenon suggests a high sulfate concentration, leading to an optimal solution concentration between 0 and 40%.(3) The intercalation of zinc into the macro-, meso-, and micropores of the halloysite is evident in the subsequent enlargement of the average nanotube diameter and interlayer distance.An increase in the zinc concentration within the solution results in a more substantial increase in the nanotube diameter, thereby signifying a direct correlation between the concentration of infiltrated zinc in the halloysite structure and that in the solution.Furthermore, the successful intercalation is corroborated by PEM data.In the activated nanotubes, the central part of the crystal is opaque, unlike the original halloysite, providing evidence that the tubes are filled with zinc.(4) Halloysite nanotubes possess a distinctive morphology that enables them to adhere firmly to plant tissues when sprayed on them.These needle-like tubes penetrate and remain on the surface of the leaves, providing a gradual release of zinc and nutrients for the plant.A significant benefit of this technique is that halloysite nanotubes are not washed away by rainwater, unlike fertilizers in the soil.(5) The surface spraying of halloysite nanotubes permits precise delivery of zinc to plants, avoiding contaminating soil and groundwater, thus rendering the proposed fertilizer environmentally friendly.This technique can play a significant role in sustainable agriculture and environmental preservation.
The study findings suggest that halloysite nanotubes can be used in developing controlled-release zinc fertilizers with low environmental impact.This presents new opportunities for producing efficient and eco-friendly fertilizers for agriculture.
Figure 6 .
Figure 6.SEM image of nanotubes (A,B) and map of zinc distribution on the plant tissue surface (A-C).Yellow points-zinc signals; Hly-halloysite nanotubes.
Figure 7 .
Figure 7. Scheme illustration of the process of zinc incorporation into halloysite nanotube.
Figure 7 .
Figure 7. Scheme illustration of the process of zinc incorporation into halloysite nanotube. | 6,497.4 | 2023-10-01T00:00:00.000 | [
"Materials Science"
] |
A Broadband THz On-Chip Transition Using a Dipole Antenna with Integrated Balun
Abstract: A waveguide-to-microstrip transition is an essential component for packaging integrated circuits (ICs) in rectangular waveguides, especially at millimeter-wave and terahertz (THz) frequencies. At THz frequencies, the on-chip transitions, which are monolithically integrated in ICs are preferred to off-chip transitions, as the former can eliminate the wire-bonding process, which can cause severe impedance mismatch and additional insertion loss of the transitions. Therefore, on-chip transitions can allow the production of low cost and repeatable THz modules. However, on-chip transitions show limited performance in insertion loss and bandwidth, more seriously, this is an in-band resonance issue. These problems are mainly caused by the substrate used in the THz ICs, such as an indium phosphide (InP), which exhibits a high dielectric constant, high dielectric loss, and high thickness, compared with the size of THz waveguides. In this work, we propose a broadband THz on-chip transition using a dipole antenna with an integrated balun in the InP substrate. The transition is designed using three-dimensional electromagnetic (EM) simulations based on the equivalent circuit model. We show that in-band resonances can be induced within the InP substrate and also prove that backside vias can effectively eliminate these resonances. Measurement of the fabricated on-chip transition in 250 nm InP heterojunction bipolar transistor (HBT) technology, shows wideband impedance match and low insertion loss at H-band frequencies (220–320 GHz), without in-band resonances, due to the properly placed backside vias.
Introduction
The terahertz (THz) wave is generally referred to as the frequency band from 0.1 THz to 10 THz, corresponding to the wavelength in air from 3 mm to 0.03 mm [1].This frequency band is often called a THz gap, because it has been known to be difficult to generate and detect signals at this frequency band using electronic or optical technologies.Recently, there has been extensive research on THz applications in various fields, such as high-speed communications, non-destructive inspections, spectroscopy, and medical imaging [2][3][4].THz monolithic integrated circuits (TMICs), such as power amplifiers, multipliers, mixers, and antennas, have been successfully developed using advanced transistor technologies, such as a complementary metal oxide semiconductor (CMOS), gallium arsenide (GaAs) high-electron mobility transistors (HEMTs), and indium phosphide (InP) heterojunction bipolar transistors (HBTs) [5][6][7][8][9][10].These semiconductor-based technologies allow the production of low-cost, compact, portable, and mass-producible THz systems.
In order to build practical THz systems, it is essential to package the developed TMICs into waveguide modules.Especially rectangular waveguides that are well-suited for THz transmission lines, as they allow low loss and easy fabrication compared with the coaxial cables.There can exist many electromagnetic (EM) modes in rectangular waveguides, the dominant mode with lowest cut-off frequency being a transverse electric (TE 10 ) mode.However, TMICs are implemented with planar transmission lines, such as microstrip lines and coplanar waveguides in a quasi-transverse electromagnetic (TEM) mode.Therefore, transition with a low loss and broad bandwidth is an indispensable component of converting the modes of EM fields between TMICs and rectangular waveguides.
There are several publications on rectangular waveguide-to-microstrip transitions at THz frequencies [11][12][13][14][15]. Off-chip transitions are designed using thin substrates with low loss and low dielectric constant (ε r ), such as 50-µm-thick quartz with ε r = 3.8, allowing a wideband low-loss performance [14,15].However, they should be electrically connected to TMICs using bond-wires, which lead to parasitic components and result in impedance mismatches and additional losses.In order to minimize the performance degradation of bond-wires, the transitions can be monolithically integrated into TMICs, which are called on-chip transitions [16].On-chip transitions eliminate the wire-bonding process and additional interconnection lines, which result in compact, low loss, low cost, and highly repeatable THz modules.
However, on-chip transitions can exhibit limited performance.The semi-conductor substrates used in TMICs generally exhibit a high dielectric constant and high loss tangent, compared with those used in off-chip transitions.Note that the waveguide size becomes very small at a THz frequency in order to eliminate higher order EM modes.For example, an internal size of WR-03 waveguide for H-band (220-320 GHz) is only 860 µm × 430 µm, therefore a very thin substrate is preferred for the transitions, which are inserted inside the waveguides.In other words, the transition designed in the thick substrate with a high dielectric constant can increase the impedance mismatch between air-filled waveguide and transition.It can also generate higher order EM modes inside the transition or waveguide, significantly degrading the transition performance.In the work published by Zamora et al. [17], an on-chip transition was designed on very thin (25 µm) InP substrate in a WR-4.3 waveguide (operating frequency = 170-260 GHz, waveguide size = 1.092 mm × 0.546 mm).However, the very thin substrate can cause wafer-handling problems and high fabrication costs.In addition, many backside vias were closely placed in order to reduce the substrate modes in coplanar waveguides, without the detailed analysis of the resonances in the InP substrate [17].
In this work, we design an on-chip rectangular waveguide-to-microstrip transition on a thick InP substrate at the H-band, using a dipole antenna with an integrated balun.We analyze the resonance problem created by the InP substrate using a three-dimensional (3-D) EM simulator.It is demonstrated that the transition can experience severe performance degradation from the resonances induced in the InP substrate.It is also shown that the resonances can be effectively eliminated by properly placing backside vias.Finally, the designed transition was fabricated and measured to show low-loss and resonance-free performance across full H-band frequencies.
Design of the On-Chip Transition with an Integrated Balun
We designed the waveguide-to-microstrip transition using a 250 nm InP HBT process, so that it can be easily integrated with TMICs developed using the same processes [16].Figure 1a shows the layer structure of this process where active circuits reside on the 75 µm-thick InP substrate (ε r = 12.9).Therefore, the backside via connecting the ground of active or passive circuits with the backside ground creates high parasitic inductance and resistance, which leads to severe performance degradation of TMICs at THz frequencies.Generally, one of the front metal layers is utilized as a ground plane of TMICs to minimize parasitic effects [8].In this work, the third metal layer (M3) is selected as a ground plane, while the first metal layer (M1) is used on the InP substrate as a signal line of microstrip lines connecting the circuit components, such as transistors, resistors, and capacitors.The intermediate metal layer (M2) is omitted in this figure for simplicity.Therefore, the inverted microstrip line is formed between M1 and M3, with the inter-dielectric layer of 3 µm-thick benzocyclobutene (BCB).Several TMICs, such as power amplifiers, were successfully fabricated using this inverted microstrip configuration [10,18].The transition in Figure 1 can be modeled as its equivalent circuit as shown in Figure 2 [20].The coupling between CPS and microstrip line is represented by an ideal transformer with an impedance transformation ratio of 1:n 2 .The length of the CPS line (Ls), or the distance between radiator and reflector, is generally determined to be around 0.2× the guided wavelength.According to the design theory of the dipole antenna [22], so that Zsc is expected to provide a high impedance (or open in an ideal case).The microstrip open stub is approximately a quarter-wave long and provides a short circuit to one terminal of the transformer.In this way, the CPS-to-microstrip line provides a function of balun (converting the balanced signal in CPS to an unbalanced one in the microstrip line).The Figure 1b shows the proposed H-band waveguide-to-microstrip transition on the InP substrate using the dipole antenna, which allows for a compact size, simple structure, broad bandwidth characteristics, and alignment of input and output waveguide ports.A detailed layout of the transition is described in Figure 1c, consisting of a dipole radiator, coplanar-strip (CPS) line, reflector, balun, and microstrip line.The rectangular waveguide (WR-03 at H-band) operates at its dominant mode (TE 10 ), presenting electric field intensity parallel to the E-plane of the waveguide [19].Its EM fields are captured by the dipole radiator (approximately a half-wave long) inducing a differential signal across CPS.The dipole radiator is designed on M3 and connected to the ground plane of microstrip line through CPS line.The ground plane of the microstrip and the metallic pedestal underneath support the transition substrate to operate as a reflector of the dipole antenna.The signals on CPS are then converted to the quasi-TEM of the microstrip line, due to the CPS-to-microstrip transition, which performs the function of the balun, as well as converting differential signals to single-ended signals [20,21].
The transition in Figure 1 can be modeled as its equivalent circuit as shown in Figure 2 [20].The coupling between CPS and microstrip line is represented by an ideal transformer with an impedance transformation ratio of 1:n 2 .The length of the CPS line (L s ), or the distance between radiator and reflector, is generally determined to be around 0.2× the guided wavelength.According to the design theory of the dipole antenna [22], so that Z sc is expected to provide a high impedance (or open in an ideal case).The microstrip open stub is approximately a quarter-wave long and provides a short circuit to one terminal of the transformer.In this way, the CPS-to-microstrip line provides a function of balun (converting the balanced signal in CPS to an unbalanced one in the microstrip line).The impedances indicated in Figure 2 at each point can be calculated using the transmission line theories as follows [19].
The properties of transmission lines are represented by characteristic impedances (Z s or Z m ), propagation constants (β s or β m ), and lengths (L s or L m ).The subscript s and m represent CPS and microstrip line, respectively.
The properties of transmission lines are represented by characteristic impedances (Zs or Zm), propagation constants (βs or βm), and lengths (Ls or Lm).The subscript s and m represent CPS and microstrip line, respectively.
Based on the equivalent circuit model, the dimensions of the transition are mainly determined from the 3-D EM simulation using Ansoft HFSS.Firstly, the simulation is performed on the transition without M1 signal lines to determine the dimensions of the dipole radiator (Wd and Ld), CPS (Ws, Ss, Ls1), and the distance of the dipole radiator from the reflector (Ls).In this simulation, the waveguide is set to port 1 and a differential 50 Ω port 2 is set up across two strips at the distance of Ls1 from the dipole radiator, as illustrated in Figure 2. The latter node is referred to as a feed point of the dipole antenna.The dimensions are then determined from the EM simulation to provide low insertion loss (10log|S21| 2 ) and good impedance match or return loss (−10log|S11|).Figure 3 shows the simulation results of this structure using the determined dimensions given in Table 1, exhibiting insertion loss of 0.54 dB and return loss greater than 15 dB at 280 GHz. Figure 4 shows the simulated Zt1 (impedance at the feed point), indicating that the dipole antenna was properly designed, allowing a broadband impedance match.Based on the equivalent circuit model, the dimensions of the transition are mainly determined from the 3-D EM simulation using Ansoft HFSS.Firstly, the simulation is performed on the transition without M1 signal lines to determine the dimensions of the dipole radiator (W d and L d ), CPS (W s , S s , L s1 ), and the distance of the dipole radiator from the reflector (L s ).In this simulation, the waveguide is set to port 1 and a differential 50 Ω port 2 is set up across two strips at the distance of L s1 from the dipole radiator, as illustrated in Figure 2. The latter node is referred to as a feed point of the dipole antenna.The dimensions are then determined from the EM simulation to provide low insertion loss (10log|S 21 | 2 ) and good impedance match or return loss (−10log|S 11 |). Figure 3 shows the simulation results of this structure using the determined dimensions given in Table 1, exhibiting insertion loss of 0.54 dB and return loss greater than 15 dB at 280 GHz. Figure 4 shows the simulated Z t1 (impedance at the feed point), indicating that the dipole antenna was properly designed, allowing a broadband impedance match.
Next, we place a microstrip signal line in M1 above the feed point and extend it along the CPS.One end of the microstrip signal line is terminated with the open circuit and the other microstrip output port of the entire transition.The microstrip interconnection line (Z m2 and L m2 ) to output port is designed to have a characteristic impedance of Z 0 (50 Ω).Following the above procedures, we determine all the dimensions of the transition, which are listed in Table 1. Figure 4 shows the simulation results of the designed waveguide-to-microstrip transition (not back-to-back but single transition).Note that in this simulation, input and output ports are waveguide and microstrip, respectively.The designed transition shows insertion loss less than 1.5 dB and return loss greater than 10 dB between 240 and 312 GHz.
Resonance Problems of the On-Chip Transition
In Section 2, we designed the single on-chip transition with a short microstrip section as shown in Figure 2.However, TMICs need long microstrip lines for interconnection and impedance matching, which can occupy a large InP area.In addition, they usually require transitions at both input and output, when they are packaged as waveguide modules.Considering this situation, we construct back-to-back connected transitions with a 352 μm-long 50 Ω microstrip line, which resides on the metallic pedestal, as shown in Figure 5a.This 50 Ω microstrip line will be replaced with TMICs.Following the above procedures, we determine all the dimensions of the transition, which are listed in Table 1. Figure 4 shows the simulation results of the designed waveguide-to-microstrip transition (not back-to-back but single transition).Note that in this simulation, input and output ports Electronics 2018, 7, 236 6 of 12 are waveguide and microstrip, respectively.The designed transition shows insertion loss less than 1.5 dB and return loss greater than 10 dB between 240 and 312 GHz.
Resonance Problems of the On-Chip Transition
In Section 2, we designed the single on-chip transition with a short microstrip section as shown in Figure 2.However, TMICs need long microstrip lines for interconnection and impedance matching, which can occupy a large InP area.In addition, they usually require transitions at both input and output, when they are packaged as waveguide modules.Considering this situation, we construct back-to-back connected transitions with a 352 µm-long 50 Ω microstrip line, which resides on the metallic pedestal, as shown in Figure 5a.This 50 Ω microstrip line will be replaced with TMICs.
Resonance Problems of the On-Chip Transition
In Section 2, we designed the single on-chip transition with a short microstrip section as shown in Figure 2.However, TMICs need long microstrip lines for interconnection and impedance matching, which can occupy a large InP area.In addition, they usually require transitions at both input and output, when they are packaged as waveguide modules.Considering this situation, we construct back-to-back connected transitions with a 352 μm-long 50 Ω microstrip line, which resides on the metallic pedestal, as shown in Figure 5a.This 50 Ω microstrip line will be replaced with TMICs. Figure 6a shows the simulation results (S-parameters) of the back-to-back transitions of Figure 5.The dimensions of the transition given in Table 1 were also used in this simulation.Figure 6a demonstrates that the performance of the transition can be severely degraded by simply connecting two transitions in the back-to-back configuration, due to several in-band resonances at 272, 289, and 310 GHz.Note that most of these were not found in the simulation results of the single transition (Figure 4). Figure 6b shows the magnitude of electric field intensity in the middle plane of InP substrate at each resonance frequency.These plots illustrate that EM energies are confined in the InP substrate at resonant frequencies, so we can claim that the resonances are related to the dielectric slab.This InP slab is covered with M3 and backside ground planes on the top and bottom, respectively, and waveguide side walls on the upper and lower sides, as shown in Figure 5b.That is, the dielectric slab is covered with metals except for the left and right side walls, and can therefore be viewed as a dielectric-filled rectangular waveguide with reduced height.This may result in several resonant Figure 6a shows the simulation results (S-parameters) of the back-to-back transitions of Figure 5.The dimensions of the transition given in Table 1 were also used in this simulation.Figure 6a demonstrates that the performance of the transition can be severely degraded by simply connecting two transitions in the back-to-back configuration, due to several in-band resonances at 272, 289, and 310 GHz.Note that most of these were not found in the simulation results of the single transition (Figure 4). Figure 6b shows the magnitude of electric field intensity in the middle plane of InP substrate at each resonance frequency.These plots illustrate that EM energies are confined in the InP substrate at resonant frequencies, so we can claim that the resonances are related to the dielectric slab.This InP slab is covered with M3 and backside ground planes on the top and bottom, respectively, and waveguide side walls on the upper and lower sides, as shown in Figure 5b.That is, the dielectric slab is covered with metals except for the left and right side walls, and can therefore be viewed as a dielectric-filled rectangular waveguide with reduced height.This may result in several resonant frequencies depending on a, b, and d, which are the width, height, and length of the dielectric-filled rectangular waveguide, as designated in Figure 5b.
These in-band resonances should be removed so that they do not degrade the performance of TMICs modules when the TMICs are packaged using the transitions.In this work, we utilize the backside vias connecting M3 and backside ground planes to remove the resonances in the dielectric slab.For example, if four backside vias are placed in the dielectric slab as shown in Figure 7a, almost all of the in-band resonances shown in Figure 6 are eliminated, as shown in Figure 7b.This result indicates that the backside vias can suppress the resonances by electrically shorting the fields in the dielectric-filled waveguide.However, there is still a resonance at 273 GHz.This seems to be caused by a rectangular cavity as indicated in Figure 7a, consisting of four backside vias serving as shorting posts of the dielectric-filled waveguide.The resonant frequency of the rectangular cavity filled with a dielectric can be calculated using Equation ( 6) [23].
Electronics 2018, 7, 236 where µ and ε denote the permeability and permittivity of the dielectric.There can exist a number of resonance modes depending on the integer values of m, n, and p. Electric field distribution at resonant frequency is plotted in Figure 7a, which is similar to that of TE 101 mode (which is a dominant mode in this structure).The four vias are placed apart by 120 and 200 µm in x-and z-axis directions, respectively, which creates the rectangular cavity with a = 430 µm, b = 79.8µm, and d = 160 µm (with the via diameter considered).The resonant frequency can be calculated to be 278 GHz from Equation ( 6), which is very close to the simulation result in Figure 7.These in-band resonances should be removed so that they do not degrade the performance of TMICs modules when the TMICs are packaged using the transitions.In this work, we utilize the backside vias connecting M3 and backside ground planes to remove the resonances in the dielectric slab.For example, if four backside vias are placed in the dielectric slab as shown in Figure 7a, almost all of the in-band resonances shown in Figure 6 are eliminated, as shown in Figure 7b.This result indicates that the backside vias can suppress the resonances by electrically shorting the fields in the dielectric-filled waveguide.However, there is still a resonance at 273 GHz.This seems to be caused by a rectangular cavity as indicated in Figure 7a, consisting of four backside vias serving as shorting posts of the dielectric-filled waveguide.The resonant frequency of the rectangular cavity filled with a dielectric can be calculated using Equation ( 6) [23].
where μ and ε denote the permeability and permittivity of the dielectric.There can exist a number of resonance modes depending on the integer values of m, n, and p. Electric field distribution at resonant In order to remove all of the resonances, the vias should be placed such that the TE 101 mode resonant frequency of the cavity exceeds the upper-edge of the H-band.A cavity with a = d = 185 µm and b = 79.8µm exhibits the resonance frequency of 319 GHz according to Equation ( 6).This implies that there will be no in-band resonance if the vias are placed such that the cavity size (a and d) is smaller than 185 µm.
Figure 8a shows the final design of the back-to-back on-chip transition with six backside vias placed properly, such that there is no in-band resonance (with the respective spacing of vias of 120 and 100 µm in x-and z-directions).Typical DC bias and ground pads of TMICs are also included in the layout.The metallic pedestal is drawn to have rounded corners considering the limitation of mechanical machining.Figure 8b shows the simulation results of the final design of the on-chip dipole transition.It shows a back-to-back insertion loss less than 3.0 dB and return loss greater than 8 dB without the resonances from 231 GHz to 314 GHz.There is a small resonance at 318 GHz, which is generated in the space between the dipole radiator and the metallic pedestal, as shown in Figure 8a.The distance between the radiator and reflector can be adjusted to remove this resonance, however this will lead to significant performance degradation of the dipole antenna.
frequency is plotted in Figure 7a, which is similar to that of TE101 mode (which is a dominant mode in this structure).The four vias are placed apart by 120 and 200 μm in x-and z-axis directions, respectively, which creates the rectangular cavity with a = 430 μm, b = 79.8μm, and d = 160 μm (with the via diameter considered).The resonant frequency can be calculated to be 278 GHz from Equation (6), which is very close to the simulation result in Figure 7.
In order to remove all of the resonances, the vias should be placed such that the TE101 mode resonant frequency of the cavity exceeds the upper-edge of the H-band.A cavity with a = d = 185 μm and b = 79.8μm exhibits the resonance frequency of 319 GHz according to Equation ( 6).This implies that there will be no in-band resonance if the vias are placed such that the cavity size (a and d) is smaller than 185 μm. Figure 8a shows the final design of the back-to-back on-chip transition with six backside vias placed properly, such that there is no in-band resonance (with the respective spacing of vias of 120 and 100 μm in x-and z-directions).Typical DC bias and ground pads of TMICs are also included in the layout.The metallic pedestal is drawn to have rounded corners considering the limitation of mechanical machining.Figure 8b shows the simulation results of the final design of the on-chip dipole transition.It shows a back-to-back insertion loss less than 3.0 dB and return loss greater than 8 dB without the resonances from 231 GHz to 314 GHz.There is a small resonance at 318 GHz, which is generated in the space between the dipole radiator and the metallic pedestal, as shown in Figure 8a.The distance between the radiator and reflector can be adjusted to remove this resonance, however this will lead to significant performance degradation of the dipole antenna.In the above simulations of the back-to-back transition, there is a small difference between S 11 and S 22 .This is because, despite the structure being symmetric, the generated meshes from the EM simulator may not be.Note that the field distribution is time-varying and was captured at the time when the resonant field was obviously observed in the simulation.In the above simulations of the back-to-back transition, there is a small difference between S11 and S22.This is because, despite the structure being symmetric, the generated meshes from the EM simulator may not be.Note that the field distribution is time-varying and was captured at the time when the resonant field was obviously observed in the simulation.
Experimental Results
Figure 9a shows the photograph of the fabricated on-chip transition using the 250 nm InP HBT process from Teledyne Technologies (Thousand Oaks, CA, USA).The dipole antennas are fabricated in M3 and the microstrip signal line in M1, under the M3 ground plane.The transition substrate is mounted on the metallic pedestal using a conductive epoxy inside the WR-03 rectangular waveguide, which consists of two split-metallic blocks, as shown in Figure 9b
Experimental Results
Figure 9a shows the photograph of the fabricated on-chip transition using the 250 nm InP HBT process from Teledyne Technologies (Thousand Oaks, CA, USA).The dipole antennas are fabricated in M3 and the microstrip signal line in M1, under the M3 ground plane.The transition substrate is mounted on the metallic pedestal using a conductive epoxy inside the WR-03 rectangular waveguide, which consists of two split-metallic blocks, as shown in Figure 9b.The size of the module is 3.0 cm × 3.0 cm × 3.0 cm.In order to estimate the loss of the rectangular waveguide itself at H-band, a 3 cm-long straight (through) waveguide was also fabricated and measured.
when the resonant field was obviously observed in the simulation.
Experimental Results
Figure 9a shows the photograph of the fabricated on-chip transition using the 250 nm InP HBT process from Teledyne Technologies (Thousand Oaks, CA, USA).The dipole antennas are fabricated in M3 and the microstrip signal line in M1, under the M3 ground plane.The transition substrate is mounted on the metallic pedestal using a conductive epoxy inside the WR-03 rectangular waveguide, which consists of two split-metallic blocks, as shown in Figure 9b.The size of the module is 3.0 cm × 3.0 cm × 3.0 cm.In order to estimate the loss of the rectangular waveguide itself at H-band, a 3 cmlong straight (through) waveguide was also fabricated and measured.The S-parameters of the fabricated transition were measured using the set up shown in Figure 10, where the H-band frequency extender modules, having WR-03 waveguide flanges, are connected to the vector network analyzer.The fabricated transition module is inserted between the waveguide ports of the extender modules.Prior to measuring the fabricated transition, the calibration was performed by using the WR-03 through-reflect-line calibration standards.Figure 11a shows the measured results of the fabricated 3 cm-long straight waveguide.It exhibits an insertion loss of 1.0-1.6 dB and return loss greater than 20 dB at full H-band.Figure 11b,c shows the measurement results of the fabricated back-to-back transition, where insertion loss is less than 4.9 dB and return loss is greater than 12 dB, between 235 GHz and 312 GHz.The measurement shows good agreement with the simulation.The small discrepancy seems to be caused by the errors in the fabrication process, such as the substrate attachment and the waveguide machining.Note that there is no in-band resonance as expected by the simulation.Figure 11b also includes the compensated insertion loss, which is obtained by subtracting the measured insertion loss of the 3 cm-long straight waveguide.The compensated insertion loss is 3.2 dB at 300 GHz.A 352 μm-long 50 Ω microstrip line in the middle of two transitions exhibits a 0.95 dB loss from EM simulation.Thus, the insertion loss per transition can be estimated to be as low as 1.1 dB at 300 GHz, which is low enough to be used for packaging TMICs.In summary, the designed InP on-chip transition using dipole antenna with an integrated balun presents low insertion loss and good return loss over a wide bandwidth at H-band, due to the proposed technique of removing in-band resonances.The S-parameters of the fabricated transition were measured using the set up shown in Figure 10, where the H-band frequency extender modules, having WR-03 waveguide flanges, are connected to the vector network analyzer.The fabricated transition module is inserted between the waveguide ports of the extender modules.Prior to measuring the fabricated transition, the calibration was performed by using the WR-03 through-reflect-line calibration standards.The S-parameters of the fabricated transition were measured using the set up shown in Figure 10, where the H-band frequency extender modules, having WR-03 waveguide flanges, are connected to the vector network analyzer.The fabricated transition module is inserted between the waveguide ports of the extender modules.Prior to measuring the fabricated transition, the calibration was performed by using the WR-03 through-reflect-line calibration standards.Figure 11a shows the measured results of the fabricated 3 cm-long straight waveguide.It exhibits an insertion loss of 1.0-1.6 dB and return loss greater than 20 dB at full H-band.Figure 11b,c shows the measurement results of the fabricated back-to-back transition, where insertion loss is less than 4.9 dB and return loss is greater than 12 dB, between 235 GHz and 312 GHz.The measurement shows good agreement with the simulation.The small discrepancy seems to be caused by the errors in the fabrication process, such as the substrate attachment and the waveguide machining.Note that there is no in-band resonance as expected by the simulation.Figure 11b also includes the compensated insertion loss, which is obtained by subtracting the measured insertion loss of the 3 cm-long straight waveguide.The compensated insertion loss is 3.2 dB at 300 GHz.A 352 μm-long 50 Ω microstrip line in the middle of two transitions exhibits a 0.95 dB loss from EM simulation.Thus, the insertion loss per transition can be estimated to be as low as 1.1 dB at 300 GHz, which is low enough to be used for packaging TMICs.In summary, the designed InP on-chip transition using dipole antenna with an integrated balun presents low insertion loss and good return loss over a wide bandwidth at H-band, Figure 11a shows the measured results of the fabricated 3 cm-long straight waveguide.It exhibits an insertion loss of 1.0-1.6 dB and return loss greater than 20 dB at full H-band.Figure 11b,c shows the measurement results of the fabricated transition, where insertion loss is less than 4.9 dB and return loss is greater than 12 dB, between 235 GHz and 312 GHz.The measurement shows good agreement with the simulation.The small discrepancy seems to be caused by the errors in the fabrication process, such as the substrate attachment and the waveguide machining.Note that there is no in-band resonance as expected by the simulation.Figure 11b also includes the compensated insertion loss, which is obtained by subtracting the measured insertion loss of the 3 cm-long straight waveguide.The compensated insertion loss is 3.2 dB at 300 GHz.A 352 µm-long 50 Ω microstrip line in the middle of two transitions exhibits a 0.95 dB loss from EM simulation.Thus, the insertion loss per transition can be estimated to be as low as 1.1 dB at 300 GHz, which is low enough to be used for packaging TMICs.In summary, the designed InP on-chip transition using dipole antenna with an integrated balun presents low insertion loss and good return loss over a wide bandwidth at H-band, due to the proposed technique of removing in-band resonances.
Conclusions
In this paper, the InP on-chip transition operating H-band frequencies (220-320 GHz), was proposed using the dipole antenna with integrated balun.Its structure was designed to provide wideband impedance match and low insertion loss using a 3-D EM simulator.In-band resonance
Figure 1 .
Figure 1.Proposed on-chip waveguide-to-microstrip transition.(a) Layer structure of 250 nm InP heterojunction bipolar transistor (HBT) process.(b) 3-D view of the transition.(c) Detailed view of the transition substrate.
Figure 1 .
Figure 1.Proposed on-chip waveguide-to-microstrip transition.(a) Layer structure of 250 nm InP heterojunction bipolar transistor (HBT) process.(b) 3-D view of the transition.(c) Detailed view of the transition substrate.
Figure 2 .
Figure 2. Equivalent circuit of the transition.
Figure 2 .
Figure 2. Equivalent circuit of the transition.
Figure 3 .
Figure 3. Simulation results of the designed dipole antenna without microstrip lines (port 1: Waveguide, port 2: Differential port across two strips on a coplanar strip (CPS) line at a feed point).(a) S-parameters; (b) Zt1.Next, we place a microstrip signal line in M1 above the feed point and extend it along the CPS.One end of the microstrip signal line is terminated with the open circuit and the other microstrip output port of the entire transition.The microstrip interconnection line (Zm2 and Lm2) to output port is designed to have a characteristic impedance of Z0 (50 Ω).Dimensions of the microstrip open stub (Wm1 and Lm1) are determined by the EM simulation.From the simulation it is found that the approximately quarter-wave long open stub with low characteristic impedance (about 35 Ω) allows for the most wideband impedance matches.Following the above procedures, we determine all the dimensions of the transition, which are listed in Table1.Figure4shows the simulation results of the designed waveguide-to-microstrip transition (not back-to-back but single transition).Note that in this simulation, input and output ports are waveguide and microstrip, respectively.The designed transition shows insertion loss less than 1.5 dB and return loss greater than 10 dB between 240 and 312 GHz.
Figure 3 .
Figure 3. Simulation results of the designed dipole antenna without microstrip lines (port 1: Waveguide, port 2: Differential port across two strips on a coplanar strip (CPS) line at a feed point).(a) S-parameters; (b) Z t1 .
Figure 4 .
Figure 4. Simulation results of the designed transition (single transition).
Figure 4 .
Figure 4. Simulation results of the designed transition (single transition).
Figure 4 .
Figure 4. Simulation results of the designed transition (single transition).
Figure 5 .
Figure 5. Back-to-back structure of the designed on-chip transitions.(a) Top view.(b) Cross-sectional view.
Figure 5 .
Figure 5. Back-to-back structure of the designed on-chip transitions.(a) Top view.(b) Cross-sectional view.
Electronics 2018, 7 ,Figure 6 .
Figure 6.Simulation results of the back-to-back transition.(a) S-parameters.(b) Magnitude of electric field intensity (E-field in V/m) in the middle plane of Indium phosphide (InP) substrate.
Figure 6 .
Figure 6.Simulation results of the back-to-back transition.(a) S-parameters.(b) Magnitude of electric field intensity (E-field in V/m) in the middle plane of Indium phosphide (InP) substrate.
Figure 7 .
Figure 7. Simulation results of the back-to-back transition with four backside vias.(a) Transition with four backside vias and electric field distribution.(b) Simulated S-parameters.
22 Figure 7 .
Figure 7. Simulation results of the back-to-back transition with four backside vias.(a) Transition with four backside vias and electric field distribution.(b) Simulated S-parameters.
Figure 8 .
Figure 8. Final design of the back-to-back transition.(a) Transition with six backside vias and electric field distribution at 318 GHz.(b) Simulated S-parameters.
22 Figure 8 .
Figure9ashows the photograph of the fabricated on-chip transition using the 250 nm InP HBT process from Teledyne Technologies (Thousand Oaks, CA, USA).The dipole antennas are fabricated in M3 and the microstrip signal line in M1, under the M3 ground plane.The transition substrate is mounted on the metallic pedestal using a conductive epoxy inside the WR-03 rectangular waveguide, which consists of two split-metallic blocks, as shown in Figure9b.The size of the module is 3.0 cm ×
Figure 10 .
Figure 10.Photograph of the H-band measurement set up.
Figure 10 .
Figure 10.Photograph of the H-band measurement set up.
Figure 10 .
Figure 10.Photograph of the H-band measurement set up.
Table 1 .
Dimensions of the designed transition (in µm).
Table 1 .
Dimensions of the designed transition (in μm). | 8,187.2 | 2018-10-05T00:00:00.000 | [
"Engineering",
"Physics"
] |
Equivalent Base Expansions in the Space of Cliffordian Functions
: Intensive research efforts have been dedicated to the extension and development of essential aspects that resulted in the theory of one complex variable for higher-dimensional spaces. Clifford analysis was created several decades ago to provide an elegant and powerful generalization of complex analyses. In this paper, first, we derive a new base of special monogenic polynomials (SMPs) in Fréchet–Cliffordian modules, named the equivalent base, and examine its convergence properties for several cases according to certain conditions applied to related constituent bases. Subsequently, we characterize its effectiveness in various convergence regions, such as closed balls, open balls, at the origin, and for all entire special monogenic functions (SMFs). Moreover, the upper and lower bounds of the order of the equivalent base are determined and proved to be attainable. This work improves and generalizes several existing results in the complex and Clifford context involving the convergence properties of the product and similar bases.
Introduction
The development of the theory of bases in Clifford analysis has indicated its growing relevance in various mathematics and mathematical physics fields. The concept of basic sets (bases) in one complex variable was initially discovered by Whittaker [1,2], and the effectiveness terminology was proposed. In this context, a significant contribution was made by Cannon [3,4], who proved the necessary and sufficient conditions for a base to possess a finite radius of regularity and to generate entire functions. In [5], Boas introduced several effectiveness criteria for entire functions.
Despite the fact that our current study has a theoretical framework, the theory of basic sets finds its utility in applications and, in particular, to solve differential equations for real-life phenomena, as indicated in [6][7][8]. Several approaches have been pursued in generalizing the theory of classical complex functions. Among these generalizations are the theory of several complex variables and the matrix approach [9][10][11]. The crucial development of the hypercomplex theory derived from higher-dimensional analysis involving Clifford algebra is called Clifford analysis. In the last decades, Clifford analysis has proved to have substantial influence as an elegant and powerful extension of the theory of holomorphic functions in one complex variable to the Euclidean space of more than two dimensions. The theory of monogenic functions created a solution for a Dirac equation or s generalized Cauchy-Riemann system, both of which are related to Riesz systems [12]. In a complex setting, holomorphic functions can be described by their differentiability or series expansion for approximations. Accordingly, exploring such representations of monogenic functions in higher-dimensional space is critical. Abul-Ez and Constales [13] initiated the equivalent bases is defined and constructed in Section 3. Section 4 details the effectiveness properties of the equivalent base. We study the effectiveness when the constituent bases are simple monic bases, simple bases with normalizing conditions, nonsimple bases with restrictions on the degree of the bases, or algebraic bases. The upper and lower bounds of the order of the equivalent base are determined and proved attainable in Section 5. Section 6 deals with the T ρ property of the equivalent base of SMPs in open balls. We conclude the paper by summarizing the results and suggesting open problems for further study.
Preliminaries
This section collects several notations and results for Clifford analyses and functional analyses, which are essential throughout the paper. More details can be found in [13,15,29] and the references therein.
The real Clifford algebra A m is a real algebra of dimension 2 m , which is freely generated by the orthogonal basis (e 0 , e 1 , . . . , e m ) in R m+1 according to the non-commutativity property e i e j + e j e i = −2δ ij , where e 0 = 1 for 1 ≤ i = j ≤ m (for details on the main concepts of A m , see [30]). The space R m+1 1 is embedded in A m . Let x ∈ A m ; then, Rex refers to the real part of x, which represents the e 0 component of x and Im x := x − (Re x)e 0 . The conjugate of x isx, whereē 0 = e 0 andē i = −e i for 1 ≤ i ≤ m. The relationship xy =ȳx holds for all x, y ∈ A m . Note that A m is equipped with the Euclidean norm | x | 2 := Re (xx).
is the generalized Cauchy-Riemann operator. Furthermore, a polynomial P(x) is specially monogenic if and only if DP(x) = 0 (so P(x) is monogenic) and there exists a i,j ∈ A m , for which x i x j a i,j .
Definition 1.
Suppose that Ω is a connected open subset of R m+1 containing 0 and f is monogenic in Ω. Then, f is called special monogenic in Ω if and only if its Taylor series near zero (which exists) has the form f (x) = ∞ ∑ n=0 P n (x)a n for certain SMPs, specifically P n (x) and a n ∈ A m .
The space of all SMPs denoted by
where P n (x) was defined by Abul-Ez and Constales [13] in the form is the Pochhamer symbol. Observe that R m+1 is identified with a subset of A m . Let P n (x) be a homogeneous SMP of degree n in x and P n (x) = P n (x) α, where α ∈ A m is a Clifford constant (see [13]). Consequently, we obtain Now, we state the definition of a Fréchet module (F-module) as follows.
Definition 2.
An F-module E over A m satisfies the following properties: (i) E is a Hausdorff space, (ii) E is a topology induced by a countable set of a proper system of semi-norms P = { . k } k≥0 such that k < l ⇒ g k ≤ g l ; (g ∈ E). This implies that V ⊂ E is open if and only if for all g ∈ V, there exists > 0, iii) E is complete with respect to a countable set of a proper system of semi-norms.
Remark 1. In the following Table 1, each indicated space represents an F-module depending on the countable set of a proper system of associated semi-norms.
The space of SMFs in the closed ball B(R) The space of entire SMFs on the whole of R m+1 g n = sup B(n) |g(x)|, x ∈ R m+1 , n < ∞ ∀g ∈ H [∞] H [0 + ] The space of SMFs at the origin
Definition 4.
A sequence {P n (x)} of an F-module E is said to form a base if P n (x) admits a right A m -unique representation of the form The Clifford matrixP = (P n,k ) is the operator's matrix of the base {P n (x)}. The base {P n (x)} can be written as follows: The Clifford matrix P = (P n,k ) is called the coefficient matrix of the base {P n (x)}. According to [13], the set {P n (x)} will be a base if and only if where I denotes the unit matrix.
Let g(x) = ∞ ∑ n=0 P n (x) a n (g) be any SMF of an F-module E. Substituting for P n (x) from (2), we obtain the basic series where Π n (g) = ∞ ∑ k=0P k,n a k (g) . Results concerning the study of the effectiveness properties of bases in the F-modules E were presented in [15]. We can write ω n (R) = ∑ k P kPn,k R , where Then, the convergence properties of a base are totally determined by the value of where ω n (R) is the Cannon sum and λ(R) is the Cannon function.
Theorem 1.
A necessary and sufficient condition for a base {P n (x)} to be The Cauchy inequality for the base in (3) is defined as [15] |P n,k | ≤ P n R R k .
Definition 6. When {P n (x)} is a base of polynomials, then Representation (2) is finite. If the number of non-zero terms N(n) in (2) is such that the base {P n (x)} is called a Cannon base of polynomials. Moreover, when lim sup n→∞ {N(n)} 1 n = a > 1, then the base {P n (x)} is said to be a general base.
Definition 7.
A base {P n (x)} of polynomials is called a simple base if the polynomial P n (x) is of degree n. A simple base is called a simple monic base if P n,n = 1 ∀ n ∈ N. Definition 8. The order of a base {P n (x} in a Clifford setting was defined in [13,14] by Determining the order of a base allows us to realize that if the base {P n (x)} has a finite order, ω, then it represents every complete SMF of an order less than 1 ω in any finite ball.
Equivalent Bases of SMPs
Employing the definition of the product base of polynomials in the context of the Clifford analysis introduced in [19], the equivalent base of SMPs can be defined as follows.
According to (13), we can write SupposeẼ is a matrix given byP (1)P(2) P (3) . It can be easily observed that where I is the unit matrix. Thus, the matrixẼ is a unique inverse of E. This implies that the set {E n (x)} is indeed a base.
Effectiveness with Simple Monic Constituents
We begin by considering the three bases {P n (x)}, where = 1, 2, 3, as simple monic bases to attain the following result. n (x)} is effective in the same space.
Proof. Suppose that the three bases
Owing to [19,21], it follows directly that the base Conversely, suppose that the bases {P . Using Equation (15), as we mentioned previously, we deduce that the base
Effectiveness with Boas Conditions
In the following, we consider the case for which each base of the constituent bases {P ( ) n (x)}, where = 1, 2, 3, of the equivalent base has the Boas conditions [31] in the form where a and M are any finite positive numbers. Proof. Using the product P ( )P( ) = I, where P ( ) denotes the matrix of coefficients of the base {P (x)},P ( ) is its inverse, and I is the unit matrix, it follows that (17) can be written in the formP Owing to (16) and (18), we obtain Using (14), (16) and (19), we have Employing the relationships (16), (19), and (20) in the Cannon sum of {E n (x)} leads to for r ≥ max{a (1 + M ), = 1, 2, 3}. According to [15,16], the equivalent base is effective for H [B(r)] , as desired.
Effectiveness of Simple Bases with Normalizing Conditions
In this subsection, we study the convergence properties of the equivalent base whose constituent bases {P n (x)}, where = 1, 2, 3, are simple bases for which the diagonal coefficients satisfy Halim's condition [25] lim n→∞ |P ( ) For the sake of shortening notations, we write We will use K to denote a constant that needs not be the same as it is used. Proof. Since the three bases {P n (x)}, where = 1, 2, 3, satisfy the condition lim n→∞ |P ( ) n,n | 1 n = 1, it follows that for all n ∈ N, the following relationship holds: Moreover, where P for all R ≥ r (see [25]), which implies that Hence, for an increasing sequence r j+1 > r j > r, j = 1, 2, . . . 7, it follows that Since P ( ) it follows that P ( ) Thus, by applying Cauchy's inequality as stated in (10), we obtain We set k = n in (28) to obtain Then, in view of (22) and the condition (i), we have Putting j = k in (28) implies that Thus, using (23) and the condition in (i) again, we can write Now, relying on the relationships (14), (30) and (32), one can obtain Using the relationships (30), (32) and (33), the Cannon sum Ω n (r 1 ) of the equivalent base satisfies Therefore, the Cannon function of the equivalent base {E n (x)} is Since r 7 can be chosen arbitrarily close to r, it follows that λ E (r) ≤ r; however, it is proved in [15,16] that λ E (r) ≥ r. This implies that λ E (r) = r, which means that the equivalent base {E n (x)} is effective for H [B(r)] .
Next, we consider non-simple bases for which there are some restrictions on the degree of the bases. Let d Thus, there exist positive numbers α and β such that Furthermore, suppose the bases {P ( ) n (x)} satisfy the following equality, which is recognized as Newns' condition [32]: Obeying these conditions, we can state and prove the following result.
Taking the n-th root and making n tend to infinity, the Cannon function of the equivalent base {E n (x)} satisfies that Since r 12 can be arbitrarily chosen near to r (36), we conclude that λ E (r) ≤ r, but λ E (r) ≥ r; then, by applying Theorem 1, we obtain that λ E (r) = r, which means that {E n (x)} is indeed effective for H [B(r)] .
Effectiveness with Algebraic Property
In the following case, the bases {P ( ) n (x)} are considered to be algebraic, satisfying the conditions [22] µ (r + ) ≤ r, = 1, 2, 3 where For this consideration, we first provide the following result. n (x)} is algebraic according to [22], the matrices of coefficients P ( ) and their powers (P ( ) ) (t) , where t = 1, 2, . . . , N < ∞, satisfy the following relationship:P ( ) where γ t are constants. Using Equation (41) and Theorem 1 in [22], we obtain From (44)-(46), and by using Cauchy's inequality, we obtain We can take the upper limit as n → ∞ and make r 7 → r + imply that µ(r + ) ≤ r, which means that the equivalent base {E n (x)} satisfies Equation (42) whenever the three constituent bases are algebraic. Therefore, the lemma is established.
The effectiveness of the equivalent bases of polynomials for H [B + (r)] holds without any restrictions on the constituent bases to be effective in the same space as indicated in the following result. from which we can deduce as before that λ E (r 1 ) ≤ r 13 . By taking r 13 → r + , we obtain λ E (r + ) ≤ r, but λ E (r + ) ≥ r. Therefore, λ E (r + ) = r, which implies that the equivalent base Now, letting r → 0 in Theorem 6, Equation (42) will be replaced by the equation Thus, the following result follows.
We can similarly proceed as in the proof of Theorem 6 to conclude the following. Now, by letting R → ∞ exist in Theorem 7, Equation (48) will be replaced by Consequently, the effectiveness of the equivalent base for the space of a complete special function, H [∞] , is established as follows.
The Order of the Equivalent Base
In this section, we determine the order ρ of the equivalent base {E n (x)} in relation to the orders ρ where = 1, 2, 3 of the constituent bases {P ( ) This relationship is formulated in the following.
n (x)} be a simple monic base of polynomials of the receptive order ρ , where = 1, 2, 3. Then, the order of the equivalent base {E n (x)} satisfies the inequality and these bounds are attainable.
Proof. Since the three bases {P
( ) n (x)} are simple monic bases of the orders ρ , = 1, 2, 3, then Equation (50) yields and By multiplying P (1) s,k and using Cauchy's inequality (see [13]), it follows that Owing to Equations (40) and (52)-(54), the Cannon sum Ω n (r) of the equivalent base satisfies Since σ can be chosen as near as possible to ρ , where = 1, 2, 3, an upper bound of the order ρ of the equivalent base {E n (x)} is given by Now, we estimate the lower bound of the order of the equivalent base. According to Theorem 3 in [21], the orderρ 1 of the inverse base {P (1) n (x)} is Using Equations (15) and (56), it follows that Therefore, n (x) = P n (x) + α n P n−1 (x), n is odd P n (x), n is even P (2) n (x) = P n (x) + β n P n−1 (x), n is even P n (x), n is odd and P n (x) = P n (x) + γ n P n−1 (x), n is odd P n (x), n is even where α n = n αn , β n = n βn , and γ n = n γn .
Therefore, the order ρ of the equivalent base is given by log Ω n (r) n log n = α + 2β + 2γ.
We can proceed in a similar procedure as in Example 1 to prove that the orders of the bases {P (1) n (x)}, {P (2) n (x)}, and {P (3) n (x)} are α, β, and γ, respectively. In this case, the order of the equivalent set is ρ = 1 2 (β − 2α − 2γ), as required.
The T ρ Property of the Equivalent Base of SMPs
In this section, we construct the T ρ property of equivalent bases of special monogenic polynomials in the open ball B(R). First, we recall the definition of the T ρ property as given in [27], as follows. Let ω(r) = lim sup n→∞ log ω n (r) n log n .
The restriction placed on the base {P n (x)} of SMPs to satisfy the T ρ property in the open ball B(R) [27] is stated as follows.
Theorem 9. Let {P n (x)} be a base of special monogenic polynomials and suppose that the function f (x) is an entire SMF of an order less than ρ. Then, the necessary and sufficient conditions for the base {P n (x)} to have the property T ρ in B(R) are ω(r) ≤ 1 ρ ∀r < R.
In this regard, we state and prove the following result.
Conclusions and Future Work
This paper employs the definition of the product base of SMPs to construct a new base called the equivalent base in Fréchet modules in the Clifford setting. The convergence properties of the derived base were treated for different classes of bases. Within this study, we indicate which type of restrictions we should consider on the coefficients to justify the effectiveness properties of the equivalent base in various regions of convergence, such as open balls, closed balls, at the origin, and for all entire SMFs. Furthermore, given the orders of the constituent bases, we determined the lower and upper bounds of the order of the equivalent base. Moreover, the T ρ property of the equivalent base is determined in the case of simple monic bases, which are promising for characterizing this property for more general bases.
Looking back to our constructed base, n (x)}{P (2) n (x)}{P (1) n (x)} and by taking {P n (x)}, a similar base {S n (x)} can be considered a special case of the equivalent base {E n (x)}, reflecting that the results in the current study generalize the corresponding results in [33].
This study encourages the provision of answers to other open problems regarding the representations of entire functions in several complex variables. We believe that the results in this study are likely to hold in the setting of several complex matrices in different convergence regions, such as hyperspherical, polycylindrical, and hyperelliptical regions.
Recently, the authors of [18] proved that the Bessel special monogenic polynomials are effective for the space H [B(r)] , and the authors of [24] proved that the Chebychey polynomials is effective for the space H [B(1)] . The Bernoulli special monogenic polynomials are proved to have an order of 1 and a type 1 2π , while the Euler special monogenic polynomials have an order of 1 and a type 1 π (see [23]). Demonstrating how the convergence properties involve the effectiveness, order, and type of the different constructed bases we have mentioned above, as well as the corresponding aspects of the original bases and, in particular, the well-known special polynomial bases, is one of the most challenging subjects to explore. The proposed methodological weakness is that the work lacks practical application. However, in upcoming research, it will be interesting to study concrete applications of mathematical physics problems, such as Legendre polynomials and their relation to solutions of the Dirac equation and its other formulation as the spinor functions, as well as in curved space-time, which has many applications in quantum mechanics. | 4,951.8 | 2023-05-31T00:00:00.000 | [
"Mathematics"
] |
Towards Next Generation Teaching, Learning, and Context-Aware Applications for Higher Education: A Review on Blockchain, IoT, Fog and Edge Computing Enabled Smart Campuses and Universities
: Smart campuses and smart universities make use of IT infrastructure that is similar to the one required by smart cities, which take advantage of Internet of Things (IoT) and cloud computing solutions to monitor and actuate on the multiple systems of a university. As a consequence, smart campuses and universities need to provide connectivity to IoT nodes and gateways, and deploy architectures that allow for offering not only a good communications range through the latest wireless and wired technologies, but also reduced energy consumption to maximize IoT node battery life. In addition, such architectures have to consider the use of technologies like blockchain, which are able to deliver accountability, transparency, cyber-security and redundancy to the processes and data managed by a university. This article reviews the state of the start on the application of the latest key technologies for the development of smart campuses and universities. After defining the essential characteristics of a smart campus/university, the latest communications architectures and technologies are detailed and the most relevant smart campus deployments are analyzed. Moreover, the use of blockchain in higher education applications is studied. Therefore, this article provides useful guidelines to the university planners, IoT vendors and developers that will be responsible for creating the next generation of smart campuses and universities.
Introduction
Smart campuses and universities require an IT infrastructure similar to the one needed by smart cities, smart buildings, or smart homes, which make use of Internet of Things (IoT) solutions [1][2][3][4] to interact with the sensor and actuation systems of a university.Similarly to smart cities, but in contrast to most smart homes and buildings, smart campuses must provide long-distance communications, as many university campuses cover an area that can reach thousands of square meters.For instance, the campus of the authors of this article (Campus of Elviña, University of A Coruña, Spain) occupies a 26,000 m 2 area.Nonetheless, such an area can be considered small when compared with the largest university campuses in the world: Berry College (Floyd County, Georgia, United States) covers 109.26km 2 of land [5], Duke University campuses (Durham, North Carolina, United States) are deployed on 37.83 km 2 [6], and the campus of Stanford University (Stanford, California, United States) occupies 8180 acres (33 km 2 ) [7].These figures mean that smart campuses should make use of specific long-distance communications infrastructure that provides indoor and outdoor connectivity to the deployed IoT gateways, sensors and actuators, while guaranteeing reduced energy consumption and thus optimized IoT node battery life.
A smart campus and university is managed according to its strategic plan (a framework of its main priorities and commitments).As a result, universities devote financial resources to implement a broad range of actions (e.g., related to digital transformation, services, applications, events, facilities, human resources, governance, educational programs, or innovation) that are fundamentally designed to meet their institutional objectives.The strategic plan is driven by the university mission, vision, and core values (further detailed in Section 2).For instance, a number of universities around the world has committed to make tangible contributions to the United Nations Sustainable Development Goals (SDGs).Note that the provided smart campus services may be similar to the ones of a smart city, but adapted to the needs of a university (e.g., mobility/transport services, energy/grid monitoring, resource consumption efficiency, user behavior monitoring, or guidance applications).However, there are other services that are specific for a university environment, such as the services for analyzing the behavior of students in certain outdoor activities or their attendance to lectures.
In addition, there are other relevant differences with respect to smart cities regarding the architectures and technologies that can be applied to smart campuses and universities:
•
Smaller size.Although, as it has been previously mentioned, there are really large campuses, most of them are not as large as cities and, in fact, there are many urban universities with buildings inside a city.Such a smaller size enables using certain communications technologies that do not need to reach very long distances.In addition, since often less devices require to be deployed than in a smart city, architectures can be less complex and need less routing layer devices, what usually reduces response time and infrastructure deployment cost.
•
Infrastructure management.Frequently, in a smart campus/university all buildings and related infrastructure are managed by the same organization (i.e., the university), often making it easier to take certain measures to ease the deployment of the required infrastructure communications and architecture than in a smart city.In contrast, in a city most space is occupied with private buildings that are managed by people that do not work for the city council, so certain infrastructure deployments can be difficult when having to deal with the different necessities of multiple people.
•
Homogeneity.A smart university can enforce the use of certain technologies and specific architectures, while a smart city in general will have to deal with a greater heterogeneity in such areas, which usually require complex solutions to integrate the numerous previously existing computational systems.
Due to the previously mentioned differences, it is essential to study specifically the most appropriate architectures and technologies for developing smart campuses and universities.
In contrast to previous reviews and surveys on smart campuses/universities, which are presented as systematic literature reviews [8,9], are focused on defining certain generic concepts/applications [10], or are centered on specific technologies [11][12][13][14][15][16], this article provides a holistic review that analyzes the application of the latest key technologies and architectures for the development of smart campuses and universities, including the following contributions, which have not been found together in the previous literature.
•
After defining the essential characteristics of a smart campus and a smart university, the most common communications architectures are detailed together with their evolution: from traditional cloud-based to the latest ones based on edge computing.
•
The application of blockchain to such architectures is studied as a tool to create a distributed immutable log that provides transparency and cybersecurity to higher education and smart campus applications.
•
The characteristics of the most relevant smart campus deployments and initiatives are analyzed.
•
The latest smart campus deployments are enumerated and multiple examples of their applications are described.
•
The most recent communications technologies for outdoor and indoor smart campus applications are studied.
•
The main challenges that will be faced by university planners, IoT vendors, and developers are listed.
The rest of this paper is structured as follows.Section 2 defines the concept of smart campus and its main features.Section 3 details the latest communications architectures for smart campuses and smart universities.Section 4 analyzes the potential applications of blockchain for deploying higher education and smart campus applications.Section 5 describes the most relevant smart campuses that have been already deployed, emphasizing their main applications and communications technologies.Finally, Section 6 indicates the main challenges for university planners, IoT vendors, and developers, and Section 7 is devoted to the conclusions.
Definitions of Smart Campus and Smart University
It must be first clarified that the term "smart campus" has been used in the past to refer to digital online platforms that manage university content [17,18] or to the set of techniques aimed at increasing university student smartness [19][20][21].However, in this article, the concept of smart campus refers to the hardware and software required to provide advanced intelligent context-aware services and applications to university students and staff.In addition, the term smart university refers to the hardware and software used to develop tools to fulfill the key dimensions of the university mission:
•
Improve the teaching, learning, and assessment processes involved in higher education.
•
Foster research and innovation.
•
Empower community-based knowledge transfer and a shared vision among the various university stakeholders (e.g., teachers, students, administration, non-profit organizations, research institutions, citizens, industries, and governments).
Such characteristics make smart campus and universities unique and enable differentiating from other concepts like smart cities.Nonetheless, smart campuses/universities are similar to smart cities in the way they are organized, which revolves around six smart areas [22]:
•
Smart governance.It allows university staff and students to take part in different decisions that need to be made on a university or on a specific campus.
•
Smart people.It is related to the engagement of the university users in teaching and learning processes or their attendance to certain events.
•
Smart mobility.In the case of a smart campus, this field deals with the different issues related to the available transport systems, which should be efficient, green, safe, and may provide intelligent services.
•
Smart environment.This field is related to smart solutions able to monitor, protect, and actuate on the environment while also managing the available resources in a sustainable way.For instance, smart environment systems provide solutions for monitoring waste, water consumption, or air quality.In addition, this field is usually related to the deployment of systems to control and monitor the energy consumed, generated and distributed throughout a campus.
•
Smart living.It is responsible for monitoring the multiple living factors involved in the daily campus activities, including the ones related to health, safety or user behavior.Thus, smart living services can perform the following [23,24]: -Estimate room occupation and determine student classroom attendance.
-Control the access to classroom/lab equipment.-Provide teaching interaction services and context-aware applications.
• Smart economy.This smart field deals with the productivity of a campus in relation to concepts like entrepreneurship or innovation.
As a summary, Figure 1 illustrates the main fields and technologies related to the deployment of a smart campus/university.The inner circle contains the six previously mentioned smart fields.The contiguous outer circle references some of the most relevant technologies required to provide solutions for such six smart fields, including IoT, Augmented Reality (AR), Cyber-Physical Systems (CPSs), or UAVs (Unmanned Aerial Vehicles).Note that some of such technologies are the same as the ones proposed by Industry 4.0 [25], so commercial and industrial deployments are already available in other fields outside smart campuses [26,27].Moreover, there are also vertical fields like cybersecurity that affect several of the cited technologies, since their contribution is key to avoid potential issues [28].Finally, the most external circle of Figure 1 indicates specific smart areas that are usually involved in the daily activities carried out in a smart campus/university.For example, smart plug-and-play objects [29] may be involved in many university activities, whereas certain environmental sensors are essential for actuating on smart buildings [30].There are also other fields, like smart agriculture, that may be specific for smart campuses that include in their premises areas for growing certain crops that
Smart Campus/University Communications Architectures
Several authors have previously proposed different smart campus architectures, but it can be stated that, in general, they can be classified as Service-Oriented Architecture (SOA) architectures [32,33] that revolve around two main paradigms (IoT and cloud computing [34]), which are usually helped by Big Data when processing and analyzing the collected information.
An example of smart campus architecture based on cloud computing is presented in [35], where the authors deployed a smart campus platform in three months by using Commercial Off-The-Shelf (COTS) hardware and Microsoft Azure cloud services.With respect to IoT, its use has been proposed for easing the deployment of architectures that allow for implementing learning, access control, or resource water management applications [10,36].
Some researchers have also proposed alternative paradigms for creating smart campuses.For instance, the authors of [37] propose an opportunistic communications architecture that allows for sharing data through infrastructure-less services.The main novelty behind the proposed architecture is the concept of Floating Content node, which is a computing device that produces data that can be shared among users located in nearby areas.Similar architectures have been proposed, but they have been focused on improving certain aspects like security [38].
Some of the latest smart campus architectures have suggested the use of the different types of the edge computing paradigm (e.g., mobile edge computing or fog computing), which have already been successfully applied to other smart fields [39].The main advantage of edge computing is its ability to offload part of the processing tasks from the cloud, delegating such tasks to the so-called edge devices, which are physically located close to the IoT nodes.Thanks to this approach, the amount of communications transactions with the cloud and latency response are reduced, while also being able to provide location-aware services [40,41].
For example, the authors of [42] make use of edge computing devices to improve their smart campus architecture.Such devices are focused on delivering services related to content caching and bandwidth allocation.A similar approach is presented in [43], where the researchers provide smart campus services through edge computing devices embedded into street lights.In the case of the work detailed in [44], the authors propose a smart campus platform called WiCloud that is based in mobile edge computing, which allows for accessing the platform servers through mobile phone base stations or wireless access points.Finally, it is worth mentioning the smart campus system presented in [45], which makes use of fog computing nodes to enhance user experience.
To clarify the previously mentioned architectures, Figure 2 illustrates their evolution.In this figure, at the top, the traditional cloud-based architecture is depicted, which is composed by two main layers:
•
Node layer: it consists of multiple IoT nodes and computing devices, whose data are collected through IoT gateways and routers in order to send them to the cloud where they are stored.
•
Cloud layer: it is essentially a central server or a group of servers where the main processing tasks are carried out.In addition, the cloud allows for interacting with third parties, it presents the stored data to remote users and enables interconnecting the multiple IoT networks that may be scattered through different physical locations.
The architecture depicted at the bottom of Figure 2 on the left represents a fog computing architecture.In this case, besides the node layer and the cloud, there is a third layer (the fog layer), which is made of fog devices.Such devices provide different local fog services to the IoT nodes and are also able to exchange data among them to collaborate in certain tasks.A fog computing device is usually implemented on a Single-Board Computer (SBC), which is essentially a reduced-size low-cost computer (e.g., Raspberry Pi and Orange Pi PC) that can be easily deployed in the campus facilities.
Finally, the third and more evolved architecture of Figure 2 is the one at the bottom, on the right, which illustrates a typical edge computing smart campus architecture.Such an architecture is basically an enhanced version of the previously mentioned fog computing-based architecture, but through its edge computing layer it provides more computing power, thanks to the use of cloudlets.A cloudlet is often a high-end computer that is able to perform compute-intensive tasks, like the ones related to complex data processing or image rendering.
Blockchain for Smart Campuses and Universities
Both academia [11][12][13] and public entities like the European Commission [46] have recently considered the improvement of the architectures described in the previous section by using Distributed Ledger Technologies (DLTs) like blockchain.Such a technology can be used for implementing higher education and smart campus applications, due to their ability to provide data exchanges among entities that do not necessarily trust each other [47].In addition, the use of blockchain enhance smart campus applications that need transparency, data immutability, privacy, and security.Furthermore, blockchain allows for developing Decentralized Apps (DApps) based on Peer-to-Peer (P2P) transactions whose processes can be automated through the use of smart contracts, which can execute pieces of code in an autonomous way [48].
Nowadays, there are many blockchain platforms, like Ethereum [49], Hyperledger Fabric [50], or the popular Bitcoin [51], that can be used in multiple practical applications [52][53][54][55].However, it is important to emphasize that blockchain is not the best technology for every application that needs to perform trustworthy data exchanges.For example, in many cases where smart campus applications are deployed in a private network, a traditional database is powerful enough and usually provides faster transactions than a blockchain.Therefore, to decide whether a blockchain is necessary, smart campus developers may use a decision framework [56] and, thus, detect certain necessary features, like the need for decentralization, transaction transparency, cybersecurity (e.g., data redundancy and protection against Denial-of-Service (DoS) attacks), or the lack of trust among entities (including respect to government agencies and banks).
Due to the previously mentioned benefits, different authors have proposed the use of blockchain and other DLTs for developing applications for smart campuses and universities.Specifically, blockchain has been suggested for guaranteeing education certificate authenticity [57,58], managing digital copyright information [59], verifying learning outcomes [60,61], or enhancing e-learning interaction [62].More potential smart campus applications and a deeper analysis on their characteristics can be found in [11][12][13]46].
Relevant Deployments
In the literature, there only a few papers that present descriptions on actual smart campuses.An example of such a paper is [63], where the authors detail the development of the smart campus of Toulouse III Paul Sabatier university (France).Specifically, the smart campus is called neOCampus and involves multiple projects able to run on an open data platform that, for instance, can use collaborative WiFi.Similarly, in [64], the authors describe a smart campus based on cloud computing, SOA, and IoT that has been deployed in the Moncloa Campus of International Excellence of the Universidad Politécnica de Madrid (Spain).In the mentioned article, two applications are detailed: one for monitoring diverse environmental parameters, and another for determining people flows inside the campus.A similar smart campus is described in [65], where the authors propose an IoT and cloud computing based architecture for the Wuhan University of Technology smart campus (China) that is aimed at supporting diverse applications.
IoT is also key in the West Texas A&M University smart campus [66], which is deployed in a 176-acre land that includes 42 different buildings.Such a smart campus is focused on providing IoT-related and secure services, and has tested systems for smart parking or environmental monitoring.Another interesting work can be found in [33], where it is detailed the Birmingham City University smart campus (United Kingdom).The aim of the proposed smart campus is essentially to create a scalable and flexible SOA architecture where service integration and orchestration can be carried out easily through the use of an Enterprise Service Bus (ESB).
Finally, it is worth mentioning the work presented in [67], where it is described from a theoretical point of view the smart campus of Sapienza (Rome, Italy), including the author's approach for providing services in a scalable way.
As a summary, Table 1 shows the main characteristics of the most relevant smart campus deployments, including details on their location, size, used hardware, and their explicit support for advanced architectures (fog and edge computing enabled architectures) and blockchain-enabled applications.
Smart Campus and University Applications
Both indoor and outdoor applications can be deployed in a smart campus/university [8], but such smart applications differ in their requirements.The most relevant difference is that, in indoor environments, IoT nodes can be usually powered through the electrical grid and can make use of fixed communications infrastructure (e.g., Ethernet, WiFi access points).In contrast, outdoors, IoT nodes usually depend on batteries and need to exchange data at relatively long distances (at least several hundred meters, up to 2 kilometers).The following are some of the most relevant applications for both scenarios:
•
Smart mobility and intelligent transport services.This kind of applications require outdoor communications coverage from traditional Wireless Local Area Networks (WLANs) or specific vehicular networks [71,72].For instance, researchers of Soochow University proposed to use in their campus an automatic gate access system and diverse services for smart parking, bus positioning or for renting bicycles [68].Similar services have been suggested by other universities for providing a smart parking service [73], electric mobility [74,75], smart electric charging [76], the use of autonomous vehicles [77], or for locating the campus buses [78,79].
•
Smart energy and smart grid monitoring.These applications are used for controlling and monitoring the generation, distribution, and consumption of the campus energy sources (e.g., photovoltaic systems or wind generators).Specifically, in the last years, multiple authors focused their research on the study of smart campus microgrids [80][81][82], smart grids [83,84], and smart energy systems [85,86].
•
Resource consumption efficiency.The use of the resources of a campus can be monitored through specific systems for garbage collection [87], water management [69], power consumption monitoring [88,89], and other solutions aimed at preserving sustainability [16].
•
Infrastructure and building control and monitoring.The state of certain buildings and assets that are scattered throughout the campus can be monitored and controlled remotely.For instance, solutions have been suggested for monitoring campus greenhouses [90], for controlling the Heating, Ventilation, and Air Conditioning (HVAC) systems of the campus buildings [91] or for automating critical infrastructure supervision through Unmanned Aerial Vehicles (UAVs) [92].
•
Green area monitoring.The health of the trees of campus can be monitored remotely through IoT sensor-based systems [93].
•
User pattern and behavior monitoring.The smart campus services and infrastructure can be optimized thanks to the analysis of the user patterns and behaviors.For instance, mobility patterns, user activities, or social interactions can be determined through smartphone apps [94,95], by monitoring WiFi communications [96,97] or by collecting data from smartphone sensors [98], wearables, or even garments [99].
•
Guidance and context-aware applications.These applications often depend on sensors and actuators spread across the campus and can help people by giving useful contextual information and indications on how to reach their destination.For instance, there is interesting research on guidance systems to aid hearing and visually impaired people [100] or for navigating the campus paths [101,102].There are also augmented reality applications that provide relevant contextual information on the campus or that are able to guide the users through it [103][104][105].
•
Classroom attendance.Different student monitoring systems have been proposed, which make use of IoT and artificial intelligence to control student classroom attendance [106] and their access to sport facilities [107].
•
Remote health monitoring.Some of the latest smart campus applications are aimed at monitoring the health of certain campus users in real-time [108] or at measuring student stress [109] and health consciousness [110].
• Smart card applications.Although smart cards have been used for a long time by universities [111], they can still provide useful services for a smart campus, like information retrieval, mobile payments, library usage, access control, or e-learning [112][113][114].Instead of a smart card, the latest developments suggested the use of the Near-Field Communications (NFC) interface of a smartphone to provide the mentioned smart campus applications [115].Due to the multiple potential applications of smart cards, some authors also proposed to mine the data collected from the student transactions to infer their behavior [116,117].
•
Teaching and Learning applications.The technologies embedded into a smart campus/university can also help students to learn through their mobile phones [118] or have an ubiquitous user-centered personalized learning and training experience with advanced analytics [119,120].These technologies also allow teachers to make use of specific learning services (e.g., online programming contests [121]), sophisticated online teaching platforms [122], and to implement novel teaching paradigms like Flipped Classroom [123] or amplification [124].
•
Research and innovation activities.Smart campus/university technologies can be used to encourage collaboration and cooperation among people (e.g., international networks of living labs).For instance, crowdsourcing can be used to collect data of people with different profiles (e.g., students, teachers, researchers, and administrative staff) and create large-scale datasets for further research and novel applications [125].
•
Community-based knowledge transfer applications.Smart campus and university technologies can be explored to benefit the global community [126], either by increasing their awareness about sustainability issues [127] or by making citizens actively involved as central players of smart environments [128].
•
Location-aware applications.In many situations the information given to smart campus users depends on their location.Such a location-aware data may include information on content, activities, projects, services, tools, knowledge, or events [129][130][131][132].
•
Security services.Smart campus managers can make use of the multiple sensors and recording devices to monitor the campus status and increase physical security through video surveillance [133] and location-aware applications [134].In addition, it is essential to protect the privacy of campus user data when making use of wireless communications [135] and preventing cyber-attacks [66].
Communications Technologies
In the past, researchers have used diverse technologies for connecting remote outdoor IoT nodes with smart campus platforms.Note that such technologies may differ a great deal from one scenario to another, as the distance to be covered and the kind obstacles found in the environment (especially, metallic objects [136]), severely condition signal propagation.
For instance, in [35], the authors propose the design of an IoT-based smart campus that makes use of BLE and ZigBee for providing short and medium-range communications.Nonetheless, note that ZigBee nodes can act as relays in a ZigBee mesh, so that the exchanged information can reach long distances [137].
WiFi (i.e., the IEEE 802.11 a/b/g/n/ac standards) is another popular technology that has already been suggested for providing indoor connectivity for smart campuses [64].Bluetooth beacons can be also used in smart campus applications [138,139], but they usually are restricted to indoor environments, as their outdoor use requires the deployment of dense networks whose management is complicated [140].
Due to the popularity of mobile phones, the main cell phone communications technologies (i.e., 2G/3G/4G) have been suggested for providing smart campus services [141,142].5G technologies are still being deployed worldwide, but their use has already been proposed due to their ability to provide fast communications and reduce response latency in smart campus applications [122].
Despite the good perspectives of 5G for the next decade, nowadays, Low-Power Wide Area Network (LPWAN) technologies are arguably one of the best alternatives for smart campus applications that require low-power and long-distance IoT communications [143].Some of the most popular LPWAN technologies are SigFox [144], LoRaWAN [145], and NB-IoT [146], existing other emerging technologies like Ingenu Random Phase Multiple Access (RPMA) [147], Weightless-P [148], or NB-Fi [149].
LoRaWAN defines the communication protocol and the system architecture for the network and uses LoRa as the physical layer.Although there are several recent works on the application of LoRa/LoRaWAN to multiple scenarios [150], only a few of them are focused on the deployment of smart campuses services [66,70,[151][152][153].For example, the authors of [152] analyze the indoor and outdoor performance of LoRaWAN on a French smart campus.In addition, other researchers [153] proposed a smart campus air quality system whose communications were carried out by LoRaWAN nodes.Another example can be found in [70], where the authors make use of a radio planning simulator to determine the optimal location of LoRaWaN nodes that provide smart campus services in outdoor applications.
There are also short-distance communications technologies that can be useful in smart campus applications.For instance, ANT+ transceivers are often embedded into chest straps to monitor performance and health in sports.Another example of popular short-distance communications technology is Radio Frequency IDentification (RFID), which is commonly used in university access control and payment systems [28].
Table 2 summarizes the main characteristics of the latest and most popular communications technologies for smart campus applications.Moreover, Table 3 compares the communications technologies of the most relevant smart campus solutions.Such a Table also indicates whether the provided references detail the network planning of the proposed solution and, as it can be observed, only a couple of works give details about it.Based on ray-tracing and physical optic near-to-far field
Future Challenges
Despite the evolution of smart campuses and universities in the last years thanks to the technological advances achieved in fields like IoT, cloud computing, and certain communications paradigms, future university planners, IoT vendors, and developers will still face relevant challenges in the following areas.
•
Scalability.As a campus can cover a large area, where a large number of users can request smart services, it is essential for applications to be easily scalable in order to adapt its performance to the number of simultaneous users.
•
Service flexibility.A smart campus/university should be able to provide multiple services, which may differ depending on the physical area where they are provided (e.g., depending on the faculty), on the specific user that request them (e.g., access privileges may differ between a student and a professor), or on the specific goal (e.g., advancing to more effective teaching and learning services may differ substantially from the design of smart environment applications).
•
Long-distance low-power communications.Since campuses usually cover areas of thousands of square meters that often involve monitoring outdoor smart IoT objects (e.g., street lights, irrigation systems), it is key to consider in the smart campus architecture the use of long-distance wireless communications technologies whose energy consumption should be as low as possible to maximize IoT node battery life.• New communications technologies.Although this article has analyzed the currently most relevant communications technologies, smart campus designers should be aware of the latest advances on communications in order to include them in the designed architecture.For instance, some authors are already suggesting potential applications for 6G technologies [155,156].
•
Blockchain integration.DLTs like blockchain can be really useful to guarantee operational efficiency, data transparency, authenticity, and security.This aspect is a key enabler to develop novel decentralized smart applications (i.e., DApps) and to leverage new artificial intelligence paradigms such as big data, machine learning, or deep learning.These paradigms need to rely on trustworthy datasets in order to reach their full potential and produce new data model-based applications.Nevertheless, smart campus designers have to use blockchain with caution, considering their advantages and disadvantages.In addition, the incorrect use of smart contracts can be a problem, since they are able to trigger certain automatic behaviors that can have serious economic or personal consequences.
•
Lack of smart campus standards and public initiatives.Although, in the last years, smart city initiatives have proliferated worldwide, there are only a few specifically related to smart campuses and universities.In addition, there is not a common framework for designing or deploying them, so future developers will have to keep compatibility and interoperability issues in mind.
•
Seamless integration of outdoor and indoor smart campus applications.Due to their communications needs, outdoor and indoor applications may differ in the underlying technologies, so it is necessary to design architectures and devices that allow for switching between communications transceivers.This means that, although the lower layers of the communications protocol may differ, the upper layers are compatible so that they are able to provide seamless communications among users, IoT objects, and the computing devices scattered throughout a campus.
Conclusions
This article examined how higher education can leverage the opportunities created by the latest and most relevant IT technologies.After analyzing the basics on smart campuses and universities, this work has focused on studying the potential of IoT, blockchain, and the most recent communications architectures and paradigms (e.g., fog/edge computing) for developing novel smart campus and smart university applications.In addition, the latest key deployments as well as their communications technologies have been detailed and analyzed.Finally, the main future challenges are listed in order to allow future university planners, IoT vendors, and developers to create a roadmap for the design and deployment of the next generation of smart campuses and universities.
Figure 1 .
Figure 1.Main fields and technologies of a smart campus.
Table 1 .
Characteristics of the most relevant smart campuses initiatives.
Table 2 .
Main characteristics of the latest and most popular communications technologies for smart campus applications.
Table 3 .
Communications technologies of the most relevant smart campuses solutions. | 7,259 | 2019-10-23T00:00:00.000 | [
"Computer Science",
"Education",
"Engineering",
"Environmental Science"
] |
Identifying Chaotic FitzHugh – Nagumo Neurons Using Compressive Sensing
We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. To accurately identify the chaotic neurons may thus be necessary and important, for example, applying appropriate controls to bring the network to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, network connections and all coupling functions, as well as their weights. The working and efficiency of the method are illustrated by using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights.
Introduction
In this paper, we address the problem of the data-based identification of a subset of chaotic elements embedded in a network of nonlinear oscillators.In particular, given such a network, we assume that time series can be measured from each oscillator.The oscillators, when isolated, are not identical in that their parameters are different, so dynamically, they can be in distinct regimes.For example, all oscillators can be described by differential equations of the same mathematical form, but with different parameters.Consider the situation where only a small subset of the oscillators are chaotic and the remaining oscillators are in dynamical regimes of regular oscillations.Due to mutual couplings among the oscillators, the measured time series from most oscillators would appear random.The challenge is to identify the small subset of originally ("truly") chaotic oscillators.
The problem of identifying chaotic elements from a network of coupled oscillators arises in biological systems and biomedical applications.For example, consider a network of coupled neurons that exhibit regular oscillations in a normal state.In such a state, the parameters of each isolated neuron are in the regular regime.Under external perturbations or slow environmental influences, the parameters of some neurons can drift into the chaotic regime.When this occurs, the whole network would appear to behave chaotically, which may correspond to a certain disease.The virtue of nonlinearity stipulates that the irregular oscillations at the network level can emerge even if only a few oscillators have gone "bad".It is thus desirable to be able to pin down the origin of the ill-behaved oscillators-the few chaotic neurons among a large number of healthy ones.
One might attempt to use the traditional approach of time-delayed coordinate embedding to reconstruct the phase space of the underlying dynamical system [1][2][3] and then to compute the Lyapunov exponents [4,5].However, since we are dealing with a network of nonlinear oscillators, the phase-space dimension is high and an estimate of the largest Lyapunov exponent would only indicate if the whole coupled system is chaotic or nonchaotic, depending on the sign of the estimated exponent.In principle, using time series from any specific oscillator(s) would give qualitatively the same result.Thus, the traditional approach cannot give an answer as to which oscillators are chaotic when isolated.
There were previous efforts in nonlinear systems identification and parameter estimation for coupled oscillators and spatiotemporal systems, such as the auto-synchronization method [6].There were also works on revealing the connection patterns of networks.For example, a methodology was proposed to estimate the network topology controlled by feedback or delayed feedback [7][8][9].Network connectivity can be reconstructed from the collective dynamical trajectories using response dynamics, as well [10,11].In addition, the approach of random phase resetting was introduced to reconstruct the details of the network structure [12].For neuronal systems, there was a statistical method to track the structural changes [13,14].While many of these previous methods require complete or partial information about the dynamical equations of the isolated nodes and their coupling functions, completely data-driven and model-free methods exist.For example, the network structure can be obtained by calculating the causal influences among the time series based on, for example, the Granger causality method [15,16], the transfer-entropy method [17] or the method of inner composition alignment [18].However, such causality-based methods are unable to reveal information about the dynamical equations of the isolated nodes.There were regression-based methods [19] for systems identification based on, for example, the least-squares approximation through the Kronecker-product representation [20], which would require large amounts of data.(Due to the L 1 nature of compressive sensing [21][22][23][24][25], the data requirement in our method can be significantly relaxed.)The unique features of our method are: (1) it is completely data driven; (2) it can give an accurate estimate of all system parameters; (3) it can lead to faithful reconstruction of the full network structure, even for large networks; and (4) it requires a minimal data amount.While some of these features are shared by previous methods, no single previous method possesses all of these features.
Here, we develop a method to address the problem of identifying a subset of ill-behaved chaotic elements from a network of nonlinear oscillators, the majority of them being regular.The basic mathematical framework underlying our method is compressive sensing (CS), a paradigm for high-fidelity signal reconstruction using only sparse data [21][22][23][24][25].The CS paradigm was originally developed to solve the problem of transmitting extremely large data sets, such as those collected from large-scale sensor arrays.Because of the extremely high dimensionality, direct transmission of such data sets would require a very broad bandwidth.However, there are many applications in which the data sets are sparse.To be concrete, say a data set of N points is represented by an N × 1 vector, X, where N is a very large integer.Then, X being sparse means that most of its entries are zero and only a small number of k entries are non-zero, where k N .One can use a random matrix Ψ of dimension M × N to obtain an M × 1 vector Y: Y = Ψ • X, where M ∼ k.Because the dimension of Y is much lower than that of the original vector X, transmitting Y would require a much smaller bandwidth, provided that X can be reconstructed at the other end of the communication channel.Under the constraint that the vector to be reconstructed is sparse, the feasibility of faithful reconstruction is guaranteed mathematically by the CS paradigm [21][22][23][24][25].In the past decade, CS has been exploited in a large variety of applications, ranging from optical image processing [26] and reconstruction of nonlinear dynamical and complex systems [27,28] to quantum measurements [29].
It has been shown in a series of recent papers [27,28,[30][31][32][33] that the detailed equations and parameters of nonlinear dynamical systems and complex networks can be accurately reconstructed from short time series using the CS paradigm.Here, we extend this approach to a network of coupled, mixed nonchaotic and chaotic neurons.We demonstrate that, by formulating the reconstruction task as a CS problem, the system equations and coupling functions, as well as all of the parameters can be obtained accurately from sparse time series.Using the reconstructed system equations and parameters for each and every neuron in the network and setting all of the coupling parameters to zero, a routine calculation of the largest Lyapunov exponent can unequivocally distinguish the chaotic neurons from the nonchaotic ones.
We remark on the generality of our compressive sensing-based method.Insofar as time series from all dynamical variables of the system are available and a suitable mathematical base can be found in which the nodal and coupling functions can be expanded in terms of a sparse number of terms, the whole system, including all individual nodal dynamics, can be accurately reconstructed.With the reconstructed individual nodal equations, chaotic neurons can be identified through routine calculation of the largest Lyapunov exponent.
Methods
Figure 1a shows schematically a representative coupled neuronal network.Consider a pair of neurons, one chaotic and another nonchaotic when isolated (say 1 and 10, respectively).When they are placed in a network, due to coupling, the time series collected from both will appear random and qualitatively similar, as shown in Figure 1b,c.It is visually quite difficult to distinguish the time series and to ascertain which node is originally chaotic and which is regular.The difficulty is compounded by the fact that the detailed coupling scheme is not known a priori.Say that the chaotic behavior leads to the undesirable function of the network and is to be suppressed.A viable and efficient method is to apply small pinning controls [34][35][36][37] to the relatively few chaotic neurons to drive them into some regular regime.(Here, we assume the network is such that, when all neurons are regular, the collective dynamics is regular.That is, we exclude the uncommon, but not unlikely, situation that a network system of coupled regular oscillators would exhibit chaotic behaviors.)Accurate identification of the chaotic neurons is thus key to implementing the pinning control strategy.(b,c) Dynamical trajectories of two neurons from the coupled system, one being chaotic when isolated and another regular, respectively.The trajectories give little hint as to which one is originally chaotic and which one is regular, due to the coupling.Specifically, Neuron 1 is originally chaotic (by setting parameter a = 0.42 in the FHN equation), while all other neurons are regular (their values of the corresponding parameter in the FHN equation are chosen uniformly from the interval [0.43, 0.45]).
Given a neuronal network, our aim is thus to locate all neurons that are originally chaotic and neurons that are potentially likely to enter into a chaotic regime when they are isolated from the other neurons or when the couplings among the neurons are weakened.Our approach consists of two steps.Firstly, we employ the CS framework to estimate, from measured time series only, the parameters in the FHN equation for each neuron, as well as the network topology and various coupling functions and weights.
As will be shown below, this can be done by expanding the nodal dynamical equations and the coupling functions into some suitable mathematical base, as determined by the specific knowledge of the actual neuronal dynamical system, and then casting the problem into that of determining the sparse coefficients associated with various terms in the expansion.The nonlinear system identification problem can then be solved using some standard CS algorithm.Secondly, we set all coupling parameters to zero and analyze the dynamical behaviors of each and every individual neuron by calculating the Lyapunov exponents.Those with positive largest exponent are identified as chaotic.
A typical time series from a neuronal network consists of a sequence of spikes in the time evolution of the cell membrane potential.We demonstrate that our CS-based reconstruction method works well even for such spiky time series.We also analyze the dependence of the reconstruction accuracy on the data amount and show that only limited data are required to achieve high accuracy in reconstruction.
The FitzHugh-Nagumo (FHN) Model of Neuronal Dynamics
The FHN model, a simplified version of the biophysically detailed Hodgkin-Huxley model [38], is a mathematical paradigm for gaining significant insights into a variety of dynamical behaviors in real neuronal systems [39,40].For a single, isolated neuron, the corresponding dynamical system is described by the following two-dimensional, nonlinear ordinary differential equations: where V is the membrane potential, W is the recover variable, S(t) is the driving signal (e.g., periodic signal) and a, b and δ are parameters.The parameter δ is chosen to be infinitesimal, so that V (t) and W (t) are "fast" and "slow" variables, respectively.Because of the explicitly time-dependent driving signal S(t), Equation ( 1) is effectively a three-dimensional dynamical system, in which chaos can arise [41].
For a network of FHN neurons, the equations are: where c ij is the coupling strength (weight) between the i-th and the j-th neurons (nodes).For c ij = c ji , the interactions between any pair of neurons are symmetric, leading to a symmetric adjacency matrix for the network.For c ij = c ji , the network is asymmetrically weighted.
Formulation of Reconstruction Problem in the CS Framework
The problem of CS can be stated as follows.Given a low-dimensional measurement vector Y ∈ R M , one seeks to reconstruct the much higher-dimensional vector X ∈ R N according to: where N M and Ψ is an M ×N projection matrix.A sufficiently sparse vector X can be reconstructed by solving the following convex optimization problem [21][22][23][24][25]: where the 2) is a non-autonomous coupling network of N oscillators, driven by external signal S(t).In general, for an isolated oscillator, the dynamics can be written as: where x i ∈ R m is an m-dimensional dynamical variable and S i (t) denotes the external driving.The network equations can then be written as: where W ij ∈ R m×m is the weighted coupling matrix between node i and node j and H is the coupling function.Our goal is to reconstruct the nodal velocity field F i and all of the coupling matrices W using time series x(t) and the given driving signal S(t).First, we group all terms directly associated with node i into F i (x i ), by defining: We have: Then, we choose a suitable base and expand F (x i ) into the following form: where g(γ) i (x i ) are a set of orthogonal and complete base functions, which are chosen such that the coefficients ã(γ) i are sparse.While the coupling function H(x i ), if it is nonlinear, can be expanded in a similar manner, for notational convenience, we assume that they are linear: H(x i ) = x i .We then have: where all of the coefficients ã(γ) i and W ij are to be determined from time series x i via CS.Specifically, the coefficient vector ã(γ) i determines the nodal dynamics, and the weighted matrices W ij give the full topology and coupling strengths of the entire network.
Suppose we have measurements of all state variables x i (t) at M different values of t and assume further that for each t value, the values of the state variables at a slightly later time, t + δt, are also available, where δt ∆t, so that the derivative vector ẋi can be estimated at each time instant.Equation ( 9) for all M time instants can then be written in the following matrix form: where the index k in x k (t) runs from one to N , k = i, and each row of the matrix is determined by the available time series at one instant of time.The derivatives at different times can be written in a vector form as: The coefficients from the functional expansion and the weights associated with all links in the network, which are to be determined, can be combined concisely into a vector a i , as follows: where [•] T denotes the transpose.For a properly chosen expansion base and a general complex network whose connections are typically sparse, the vector a i to be determined is sparse, as well.Finally, Equation ( 9) can be written in the standard CS form as: a linear equation in which the dimension of the unknown coefficient vector a i can be much larger than that of X i , and the measurement matrix G i will have many more columns than rows.In a conventional sense, this equation is ill defined, but since a i is sparse, insofar as its number of non-zero coefficients is smaller than the dimension of X i , the vector a i can be uniquely and efficiently determined by CS [21][22][23][24][25].
Results
We consider the FHN model with sinusoidal driving: S(t) = r sin ω 0 t.The model parameters are r = 0.32, ω 0 = 15.0,δ = 0.005 and b = 0.15.For a = 0.42, an individual neuron exhibits chaos.The time series are generated by the fourth-order Runge-Kutta method with step size h = 10 −4 .We sample three consecutive measurements at time interval τ = 0.05 apart and then use a standard two-point formula to calculate the derivative.Representative chaotic time series and the corresponding dynamical trajectory are shown in Figure 2.
We first present the reconstruction result for an isolated neuron, by setting to zero all coupling terms in Equation (10).The vector a i to be determined then contains the unknown parameters associated with a single neuron only.We choose power series of order four as the expansion base, so that there are 17 unknown coefficients to be determined.We use 12 data points generated from a random starting point.The results of the reconstruction are shown in Figure 3a,b for variables V and W , respectively.The last two coefficients associated with each variable represent the strength of the driving signal.Since only the variable W receives sinusoidal input, the last coefficient in W is nonzero.By comparing the positions of nonzero terms and our previously assumed vector form, g i (t), we can fully reconstruct the dynamical equations of any isolated neuron.In particular, we see from Figure 3a,b that all estimated coefficients agree with their respective true values.Figure 3c shows how the estimated coefficients converge to the true values as the number of data points is increased.We see that, for over 10 data points, we can already reconstruct faithfully all of the parameters.Next, we consider the network of coupled FHN neurons as schematically shown in Figure 1a, where the coupling weights among various pairs of nodes are uniformly distributed in the interval [0.3, 0.4].The network is random with connection probability p = 0.04.From time series, we construct the CS matrix for each variable of all nodes.Since the couplings occur among the variables V of different neurons, the strengths of all incoming links can be found in the unknown coefficients associated with different V variables.Extracting all coupling terms from the estimated coefficients, we obtain all off-diagonal terms in the weighted adjacency matrix.
To assess the reconstruction accuracy, we define E nz as the average normalized difference between the non-zero terms in the estimated coefficients and the real values: where M nz is the number of non-zero terms in the actual coefficients, c k and c k are the k-th nonzero terms in the estimated coefficients and the true one, respectively.For convenience, we define R m as the relative number of data points normalized by the number of total unknown coefficients.Figure 4 shows the reconstructed adjacency matrix as compared with the real one for R m = 0.7.We see that our method can predict all links correctly, in spite of the small errors in the predicted weight values.The errors are mainly due to the fact that there are large coefficients in the system equations, but the coupling weights are small.Using weighted adjacency matrix, we can identify the coupling terms in the vector function F i (x i ), so as to extract the terms associated with isolated nodal velocity field F i .We can then determine the value of parameter a and calculate the largest Lyapunov exponent for each individual neuron.The results are shown in Figure 5a,b.We see that, for this example, Neuron 1 has a positive largest exponent, while the largest exponents for all others are negative, so 1 is identified as the only chaotic neuron among all neurons in the network.
Next, we discuss the relationship between reconstruction error and data requirement.As shown in Figure 6, for different network sizes N , the reconstruction error decreases with R m .For R m larger than a threshold, the normalized error E nz is small.For N = 40, the threshold is about 0.6, and it is 0.5 for N = 60.That is because, for fixed connection probability, a larger network will have more sparse connections, requiring a smaller value of R m for accurate reconstruction.Finally, we study the performance of our method with respect to systematically varying of network size and edge density.As with any method, larger networks require more computation.We study networks of a size up to N = 100 nodes.Figure 7a shows the normalized error associated with the nonzero terms, E nz , for different network sizes N and normalized data amount R m .For a given network size, similar to Figure 6, E nz gradually decreases to a certain low level as the relative data amount R m is increased.We further observe that smaller values of R m are required to reconstruct larger networks of the same connecting probability P .Note that R m is the relative data amount defined with respect to the number of unknown coefficients, so for larger networks, the absolute data amount required actually increases.In Figure 7b, we show the contour plot of values of E nz in the parameter plane (R m , P ) for a fixed network size (N = 60).We see that, for a fixed value of R m , as P is increased, the error E nz also increases, which is anticipated, as denser networks lead to a denser projection matrix in compressive sensing.
Conclusions
We develop a completely data-driven method to detect chaotic elements embedded in a network of nonlinear oscillators, where such elements are assumed to be relatively few.From a biomedical perspective, the chaotic elements can be the source of certain diseases, and their accurate identification is desirable.In spite of being only a few, the chaotic oscillators can cause the time series from other, originally regular oscillators to appear random, due to the network interactions among the oscillators.The standard method in nonlinear time series, the method of delay-coordinate embedding, cannot be used to identify the local chaotic elements, because the method can give information only about the global dynamics.For example, one can attempt to estimate the largest Lyapunov exponent by using time series, either from a chaotic oscillator or from an originally regular oscillator, and the embedding method would yield qualitatively or even quantitatively similar results.Our compressive sensing-based method, however, overcomes such difficulties by generating an accurate estimate of all system equations, which include the local dynamical equations of each individual node and all coupling functions.Isolating the coupling functions from the local velocity fields, we can obtain the original dynamical equations for each individual oscillator, enabling efficient calculation of the Lyapunov exponents for all oscillators and, consequently, accurate identification of the chaotic oscillators.We illustrate this methodology by using model networks of FHN neurons.One key virtue of compressive sensing, namely the low data requirement, enables us to accomplish the task of identifying chaos with short time series.Our method is generally applicable to any nonlinear dynamical networks, insofar as time series from the oscillators are available.
Comparing with our previous works on compressive sensing-based nonlinear system identification and reverse engineering of complex networks [27,28,[30][31][32][33], the new technical features of the present work are the following.Firstly, we demonstrate that the compressive sensing-based system identification is effective for spiky time series that are typical of neuronal networks.Secondly, local velocity fields and non-uniform weights of node-to-node interactions can be reconstructed accurately for neuronal networks with both fast and slow variables in the presence of external driving.Thirdly, the method works regardless of the ratio between the number of originally chaotic and nonchaotic oscillators.The great flexibility, the extreme low data requirement and high accuracy make our method appealing for various problems arising from nonlinear system identification, especially in biology and biomedicine.
There are a number of limitations to our method.For example, for any accessible node in the network, time series of all dynamical variables are required.If information from one node or some of the nodes in the network is inaccessible, or "hidden" from the outside world, it is not feasible to recover the nodal dynamical system of such nodes and their neighbors [31,33].The "hidden dimensions" problem, in which some dynamical variables are not given, is another obstacle to realistic applications.Our compressive sensing-based method also requires reasonable knowledge about the underlying complex system, so that a suitable mathematical base can be identified for expansions of the various nodal and coupling functions.Further efforts are certainly needed.
Figure 1 .
Figure1.(a) Schematic illustration of a small neuronal network, where the dynamics of each neuron is mathematically described by the FitzHugh-Nagumo (FHN) equations.(b,c) Dynamical trajectories of two neurons from the coupled system, one being chaotic when isolated and another regular, respectively.The trajectories give little hint as to which one is originally chaotic and which one is regular, due to the coupling.Specifically, Neuron 1 is originally chaotic (by setting parameter a = 0.42 in the FHN equation), while all other neurons are regular (their values of the corresponding parameter in the FHN equation are chosen uniformly from the interval [0.43, 0.45]).
Figure 2 .
Figure 2. (a) Chaotic time series of the membrane potential V and recovery variable W from a single neuron for a = 0.42; and (b) the corresponding dynamical trajectory.
Figure 3 .
Figure 3. (a,b) Predicted coefficients from compressive sensing (CS) and a comparison with the actual parameter values in the dynamical equations of variables V and W .The number of data points used is 12. (c) Predicted parameters for a single neuron as the number of data points is increased.The sampling interval is ∆t = 0.05.All results are averaged over 10 independent time series.
Figure 4 .
Figure 4.For the network in Figure 1a, (a) the actual and (b) estimated weighted adjacency matrix.The normalized data amount used in the reconstruction is R m = 0.7.
Figure 5 .Figure 6 .
Figure 5. (a) Estimated values of parameter a for different neurons (red circles), as compared with the actual values (black crosses).The random network size is N = 20 with connection probability p = 0.04.The normalized data amount used in reconstruction is R m = 0.7.(b) The largest Lyapunov exponents calculated from the reconstructed system equations.The reference line denotes a null value.
Figure 7 .
Figure 7. (a) For a random network of fixed connecting probability p = 0.04, a contour plot of the normalized error associated with nonzero terms, E nz , in the parameter plane (R m , N ).(b) For a random network of fixed size N = 60, a contour plot of E nz in the parameter plane (R m , P ).All results are obtained from 10 independent network realizations.See the text for explanations. | 6,071.8 | 2014-07-15T00:00:00.000 | [
"Computer Science"
] |
Hybrid autoencoder with orthogonal latent space for robust population structure inference
Analysis of population structure and genomic ancestry remains an important topic in human genetics and bioinformatics. Commonly used methods require high-quality genotype data to ensure accurate inference. However, in practice, laboratory artifacts and outliers are often present in the data. Moreover, existing methods are typically affected by the presence of related individuals in the dataset. In this work, we propose a novel hybrid method, called SAE-IBS, which combines the strengths of traditional matrix decomposition-based (e.g., principal component analysis) and more recent neural network-based (e.g., autoencoders) solutions. Namely, it yields an orthogonal latent space enhancing dimensionality selection while learning non-linear transformations. The proposed approach achieves higher accuracy than existing methods for projecting poor quality target samples (genotyping errors and missing data) onto a reference ancestry space and generates a robust ancestry space in the presence of relatedness. We introduce a new approach and an accompanying open-source program for robust ancestry inference in the presence of missing data, genotyping errors, and relatedness. The obtained ancestry space allows for non-linear projections and exhibits orthogonality with clearly separable population groups.
, where %& and %& "'&$ are the unnormalized genotype and normalized genotype at SNP for individual , respectively, and & is the sample allele frequency for SNP . Then SVD takes as input the normalized genotype matrix "'&$ ∈ ℝ " ! ×$ and decomposes it into a product of three matrices "'&$ = Σ / where Σ ∈ ℝ " ! ×$ is a diagonal matrix of size × ! containing the singular values and the orthogonal matrices ∈ ℝ " ! ×" ! and ∈ ℝ $×$ contain the left and right singular vectors, respectively. The dimension of the input data is then reduced by projecting it onto a space spanned by the top singular vectors. Let 0 ∈ ℝ " ! ×0 and 0 ∈ ℝ 0×0 denote the left singular vectors and the singular values of the first principal components, then the input data in its lower dimensional representation is given by 0 Σ 0 , and the corresponding loading matrix is denoted by 0 ∈ ℝ $×0 . The projected scores of unseen data can be obtained by multiplication of the normalized genotype matrix with 0 .
Unnormalized Principal Component Analysis
UPCA works similarly to PCA, except that SVD takes the unnormalized genotype matrix as input. Interpopulation variation is captured from the second PC onwards, while the first PC represents an average SNP pattern, as is common for PCA on non-centered data. Therefore, the first PC in UPCA can be omitted.
Spectral Decomposition Generalized by Identity-by-State Matrix
SUGIBS was previously proposed as a robust alternative against laboratory artifacts and outliers 14 by applying SVD on the IBS generalized genotype matrix, where IBS information corrects for potential artifacts due to errors and missingness.
Let ∈ ℝ " ! ×" ! denotes the pairwise IBS similarity matrix of the unnormalized genotype matrix , which is calculated following the rules in Table S1. The similarity degree of an individual is defined as %% = ∑ %1 " ! 1 any other individual in the reference dataset. The similarity degree matrix is a diagonal matrix defined as = 9 !! , … , " ! " ! }. SUGIBS works similarly to PCA, except that the IBS generalized genotype matrix ) ! $ is used as input for performing SVD, i.e., ) ! $ = Σ / . Likewise, to UPCA, the first component of SUGIBS aggregates the average SNP pattern and can therefore be omitted. For the projection of unseen samples, we use the second component and onwards 0 2 = { * , … , 03! } where 0 is the th right singular vector.
Given an unseen dataset with * individuals and the same set of SNPs as the reference dataset, let ∈ ℝ " $ ×$ denote its unnormalized genotype matrix. The reference similarity degree is defined as @ %% = ∑̃% 1 where ̃% 1 is the IBS similarity between the th individual in the unseen dataset and the th individual in the reference dataset. The reference similarity degree matrix is defined as B = 9 @ !! , … , @ " $ " $ }. The unseen dataset can be projected onto the reference space following B )! 0 2 .
Autoencoder
An autoencoder consists of two parts: an encoder network and a decoder network. The encoder maps input data to a latent representation = ( + ); the decoder maps the latent representation back to a reconstruction output J = ( ′ + ′) where (•) and (• ) are nonlinear functions, and 2 are the weight matrix, and 2 are the bias vector, , J and are the input data, the reconstructed data and the latent representation, respectively. The network is then trained to minimize the reconstruction error. The objective function takes the form 45 = N P , Q ( )RS ( where is the reconstruction error.
Regularized Autoencoder
To reduce overfitting of the model and improve its performance, regularization-based methods are often used. One widely used regularization is weight-decay 43 , which favors small weights by optimizing the following regularized objective function 45)67 = N P , Q ( )RS + N * 6 ( where hyperparameter controls the strength of the regularization. This encourages sparse weight matrix and thus reduces the redundancy.
Denoising Autoencoder
In a denoising autoencoder 25,26 , the initial input is partially corrupted before training, and then sent through the network. Based on the encoding and decoding of the corrupted input data, it is desirable to predict the original, uncorrupted data as its output. This yields the following objective function: where the corrupted version W of original input is obtained through the process ( W| ).
Denoising Autoencoder with Modified Loss
An additional term favoring robust mapping at the bottleneck/latent space is included in the original objective function of DAE, yielding the following loss function: where hyperparameter controls the emphasis on noise-free projections. The objective now is to learn latent representations that are not only robust for reconstruction, but also at the same time robust for projection.
Implementation Details
The encoder and decoder networks are fully connected feed-forward networks with Leaky ReLU [48] activation functions connecting each layer, except for the last layer of the decoder sigmoid activation is used to ensure the output values are bounded between [0 1]. We used the Adam optimizer [49] with an initial learning rate of 0.001. To allow the optimizer to take smaller steps when training gets close to convergence, we applied a learning rate scheduler to reduce the learning rate of the optimizer by 0.9999 after every epoch. To fit in available GPU memory (11,019MiB), we trained the networks in mini batches of 256 samples. The models are implemented and trained on an NVIDIA GeForce RTX 2080 Ti using PyTorch 1.7.
To implement the early stopping mechanism, we track if the validation loss keeps improving. If the difference of the validation loss between two epochs is below 0.1, it is quantified as no improvement. The early stopping patience was set to be 300 epochs and the maximum number of epochs equaled 3000 when training AE and SAE-IBS. For denoising extensions, every 25 epochs we generated a different simulated noisy dataset and fed to the model, therefore we relaxed the max epoch (to 5000) when training DAE, DAE-L, D-SAE-IBS, and D-SAE-IBS-L. To speed up the learning of SAE-IBS (and its denoising extensions) and to provide a well-initialized embedding from the encoder to apply SVD on, we pre-trained an AE firstly with up to 1000 epochs and continued training SAE-IBS afterward.
Following the suggestions by [50], we experimented with several parameter configurations in two steps: the first one involves the number of layers and the number of hidden units; the second one investigates emphasis on projection loss . If not explicitly stated otherwise, recommended values by default in PyTorch 1.7 [51] were used for any other hyperparameters (amsgrad: False, betas: [0.9, 0.999], eps: 1e-08).
Firstly, the final hyperparameter configuration of the AE model with latent space dimension of 2 was decided. As shown in Table S2, the configuration in bold was selected as the final setting for the experiments of robust projection because it resulted in the smallest validation loss and NRMSD for the simulated missingness experiments, and relatively small NRMSD for the simulated erroneousness experiments. The same procedure was conducted for other tasks and their final settings are listed in Table S3. Then, to ensure fair comparison, the same settings were used when training AE with higher latent space dimensions, denoising variants of AE, and hybrid models. Furthermore, for the experiments of robust projection using DAE-L, we fine-tuned the hyperparameter defining the emphasis on projection loss β based on NRMSD (Table S4 and Table S5). Similarly, this parameter was tuned for D-SAEIBS-L and the final settings are displayed in Table S6 and Table S7. | 2,093 | 2022-06-17T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Versatile surface for solid–solid/liquid–solid triboelectric nanogenerator based on fluorocarbon liquid infused surfaces
ABSTRACT The triboelectric nanogenerator (TENG) is a recent mechanical energy harvesting technology that has been attracting significant attention. Its working principle involves the combination of triboelectrification and electrostatic induction. The TENG can harvest electrical energy from both solid–solid and liquid–solid contact TENGs. Due to their physical difference, triboelectric materials in the solid–solid TENG need to have high mechanical properties and the surface of the liquid–solid contact TENG should repel water. Therefore, the surface of the TENG must be versatile for applications in both solid–solid and liquid–solid contact environments. In this work, we develop a solid–solid/liquid–solid convertible TENG that has a slippery liquid-infused porous surface (SLIPS) at the top of the electrode. The SLIPS consists of a HDFS coated hierarchical Al(OH)3 structure and fluorocarbon liquid. The convertible TENG developed in this study is capable of harvesting electricity from both solid–solid and liquid–solid contacts due to the high mechanical property of Al(OH)3 and the water-based liquid repelling nature of the SLIPS. When the contact occurs in freestanding mode, electrical output was generated through solid–solid/liquid–solid sliding motions. The convertible TENG can harvest electricity from both solid–solid and liquid–solid contacts; thus, it can be a unified solution for TENG surface fabrication.
Introduction
Owing to the rising demand for portable electronics, an increasing number of studies have focused on harvesting electrical energy from ambient sources, including solar [1][2][3], thermal [4][5][6], and salinity difference [7][8][9]. Among these, mechanical energy sources are suitable for harvesting electrical energy since they are less affected by external conditions, such as weather, temperature, and location. Several technologies have been developed for the effective conversion of mechanical energy into electricity, including piezoelectric transducers [10][11][12], and electromagnetic induction [13][14][15]. Among these technologies, the triboelectric nanogenerator (TENG), a recently developed mechanical energy harvesting technology, has been attracting significant attention; its working principle is based on the combination of triboelectrification and electrostatic induction [16][17][18][19]. In typical TENGs, the electrode is covered with a polymer material to maximize the surface charge, after which it is placed in contactseparation with a counter-charged triboelectric material to generate electricity [20][21][22][23]. This counter-charged triboelectric material can be either solid or liquid depending on the working condition [24][25][26][27]. Both solid-solid and liquid-solid contact TENGs have distinct characteristics due to their different physical phases. Due to this difference, these two TENGs require different material properties; triboelectric materials in solid-solid TENGs require high mechanical properties for a long lifespan, and the surface of liquid-solid contact TENGs needs to be water repellent for constant liquid separation [28][29][30][31]. Previous studies have presented these TENGs as separate devices; therefore, the triboelectric surfaces of TENGs were developed separately as well. However, for a TENG to harvest electricity from ambient mechanical energy sources such as wind and raindrops, it must be able to adapt to both solid-solid and liquid-solid contact environments. Therefore, a unified TENG surface that is capable of effectively harvesting electrical energy from both solid-solid and liquid-solid contacts is required.
In this study, we develop a solid-solid/liquid-solid convertible TENG that has a slippery liquid-infused porous surface (SLIPS) at the top of its electrode. On this device, a low-surface tension fluorocarbon liquid (perfluoropolyether, PFPE, Krytox) was placed over a trichloro(1H,1H,2H,2H-perfluorooctyl) silane (HDFS)-coated Al(OH) 3 micro-/nanostructure on the aluminum surface. Due to the large number of fluorine atoms on the surface, both the PFPE liquid and HDFS-coated surface can be negatively charged during the contact-separation process, which can lead to the generation of electrical energy from the solidsolid contact. In addition, the surface can effectively repel water-based liquids; thus, it can induce constant contact and separation between the liquid and the TENG surface. The convertible TENG developed in this study could generate electrical energy when in contact with various water-based liquids, such as tap water, carbonated water, liquor, vinegar, and sports drink. When the contact occurred in the freestanding mode, electrical energy was generated from the solidsolid and liquid-solid sliding motions. Thus, this paper presents a unified TENG surface that can effectively harvest electrical energy from both solid-solid and liquid-solid mechanical input.
The fabrication of HDFS coated Al(OH) 3 surface with PFPE liquid
First, a 1 mm-thick bare aluminum plate was cleaned with deionized water and ethyl alcohol and dried after rinsing. Thereafter, the aluminum plate was dipped into a 0.5 M NaOH (98%, Samchun Chemical, Korea) solution for 1 min and, subsequently, in boiling water for 10 min. To fabricate a hierarchical structure, the aluminum substrate was etched in a 1 M HCl (35-37%, Samchun Chemical, Korea) solution at 80°C for 2 min. The fabricated surface was rinsed with deionized water again and dried at 60°C for 24 h. For the HDFS coating, the aluminum substrate with a hierarchical structure was immersed in a 0.1 v/v% hexane (96%, Samchun Chemical, Korea) solution of HDFS (Gelest, USA) for 10 min at room temperature. Subsequently, the treated sample was thoroughly rinsed with n-hexane and, thereafter, dried at 110°C for 10 min. PFPE liquid was applied on the fabricated aluminum surface and spin-coated at 500 rpm for 1 min. The final fabricated surface was wired and attached to the acrylic substrate for electrical measurement.
Measurements
The electrical measurements including voltage and current measurements were conducted using a mixed domain oscilloscope (MDO 3014, Tektronix Co.) and a low-noise current preamplifier (SR570, Stanford Research Systems Co.). The vertical mechanical input was provided by a vibration tester (ET-126B-4, Labworks Co.) connected to an amplifier (pa-151, Labworks Co.) and function generator (AFG3012C, Tektronix Co.).
Liquid materials
The liquids used in this study are tap water, carbonated water, liquor (Chamisul, 17.8% alcohol, HITEJINRO Co.), vinegar (Apple vinegar, Ottogi Co.), and sports drink (Pocari Sweat, Donga-Otsuka Co.). Figure 1 shows the schematic illustration and magnified images of the micro-/nanostructures on the aluminum surface. As shown in Figure 1(a), the hierarchical structure of Al(OH) 3 is constructed on the aluminum surface. On the outer side of the Al(OH) 3 layer, a self-assembled monolayer coating of HDFS is fabricated. The PFPE liquid is applied to the hydrophobic hierarchical structure to form a SLIPS. The SLIPS is extremely liquidrepellent, and it can effectively repel water-based liquids [32]. In this study, 1 mL of liquid PFPE is applied to the hierarchical structure and spin-coated for 1 min at 500 rpm to form an evenly distributed thin liquid layer. Figures 1(b,c) and S1 are the magnified images of the hierarchical structure taken by FE-SEM. As shown in the image, both a micro-sized stair-like structure (Figure 1 (b)) and a nano-sized wall structure (Figure 1(c)) are formed on the aluminum surface.
Results and Discussion
In TENGs, selecting a material with a high surface charge is important. A high surface charge will facilitate the flow of electrons, which would, in turn, generate a relatively high electrical output. Generally, materials with a high electron affinity have a corresponding high surface charge. This accounts for the high usage frequency of fluoropolymers, such as polytetrafluoroethylene (PTFE), in TENGs. For comparison with the material used in this device, the PFPE liquid can be expressed as F-(CF(CF 3 )-CF2-O) n -CF 2 CF 3 , where n lies within the range of 10-60, and HDFS can be expressed as CF 3 (CF 2 ) 5 CH 2 CH 2 SiCl 3 . Both the PFPE liquid and HDFS contain a large number of fluorine atoms that has a high electron affinity. Therefore, the PFPE liquid-applied HDFS surface, which has a high negative surface charge, can be suitable for the solid-solid contact. In addition, the hierarchical structure on top of aluminum is that of Al(OH) 3 , which has more mechanical properties than PTFE [33,34]. The TENG generates electrical output through mechanical contact and friction; consequently, a long lifespan can be expected with high mechanical properties. Figure 2(a) is a schematic illustration of the solidsolid contact TENG working principle, which is the same as that of the single electrode TENG [35,36]. As shown in the figure, the triboelectric material at the top is positively charged and the PFPE liquid-applied HDFS surface is negatively charged due to repeated contact and separation processes. The aluminum electrode at the bottom is affected by the electric field of the SLIPS. As external pressure is applied to the triboelectric material, its surface approaches the single electrode TENG surface. The electrical equilibrium of the aluminum electrode is disrupted by the electric field on the surface of the triboelectric material, and electrons flow into the aluminum electrode. When the triboelectric material contacts the SLIPS, the aluminum electrode attains electrical equilibrium once more. Once the external pressure is eliminated, the triboelectric material detaches from the SLIPS and electrons flow back to the electrical ground owing to the electric field of the SLIPS. By repeating this process, the TENG can produce alternating current (AC) by the contact-separation process between two solid materials. Figure 2(b,c) show the open-circuit voltage (V OC ) and closed-circuit current (I CC ) outputs of the device, respectively. The TENG was supplied with 6 Hz input using a mechanical vibration tester. As shown in the plot, the TENG generated the highest output when nylon came in contact with the SLIPS. The nylon contact produced high positive peaks, while the PTFE and PVC contacts produced high negative peaks. This is because the PTFE and PVC surfaces became negatively charged when they came in contact with the SLIPS, whereas the nylon surface became positively charged. The SLIPS was formed with the PFPE liquid and HDFS, and for it to produce the highest output when in contact with nylon, it should be negatively charged. When the contact material is nylon, the TENG produces a maximum V OC of 122 V and a maximum I CC of 6.4 μA.
The TENG can generate electrical energy from liquid-solid contact as well because the SLIPS has an excellent liquid repellant property. Figure 3(a) shows the working mechanism of the liquid-solid contact TENG [37,38]. In Figure 3(a), the waterdrop becomes positively charged when it moves through the air and water pipe, and the SLIPS is negatively charged due to the constant contact and separation of the waterdrop. Due to the negatively charged SLIPS, the aluminum electrode will have a positive net charge. When the waterdrop approaches the SLIPS, the positivelycharged waterdrop neutralizes the negatively charged SLIPS; therefore, the electrons will flow from the electrical ground to the aluminum electrode. After the waterdrop attaches completely, there will be a minimal surface area difference as the waterdrop moves toward the edge of the TENG. When the waterdrop separates from the TENG, the electrons will flow back to the ground. A repetition of the waterdrop contact and separation processes produces AC.
In a typical TENG, a thin layer of dielectric material is preferred to effectively induce charges on the electrode [39,40]. In this device, the thickness of the dielectric material is equal to the amount of the PFPE liquid remaining on the hierarchical structure. As shown in Figure 3(b-i), the PFPE liquid forms a flat liquid film when initially applied. However, when spin-coated, it forms a thin liquid film along the hierarchical structure on the aluminum electrode ( Figure 3(b-ii)). This thickness difference of the PFPE liquid affects the power generation of the device. For comparison, two devices with identical surfaces that have equal amounts of the PFPE liquid were prepared. Subsequently, one sample was spin-coated for 1 min at 500 rpm. Afterward, 2 mL of tap water was dropped on each sample from a height of 20 cm for electrical measurement. As shown in the plot of Figure 3(c), the spin-coated devices produced a peak voltage approximately 5 times higher than that produced by the nonspin-coated device. This indicates that the PFPE liquid is able to properly charge the aluminum electrode when the PFPE liquid film is thin. In addition, the peak-like shape of Al(OH) 3 hierarchical structure accumulates the electrical charge and enhances the output accordingly. The SLIPS at the top of the aluminum electrode can repel water-based liquids effectively, including various liquids that are frequently used in everyday life. Figures 3(d) and S2 are photographs of 30 μL-drops of various liquids on a SLIPS, which were taken at 2 s intervals. The surface was tilted 10°for the liquid drop to gravitate toward the edge due to gravitational force. The tested liquids are tap water, carbonated water, liquor (17.8% alcohol), vinegar, and sports drink. As shown in the images, all these liquids slipped to the ground without leaving liquid residues on the surface. Photograph of hierarchical structure without PFPE liquid is shown in Figure S3, after 100 mL of vinegar was poured. As shown in Figure S3, there are many liquid drops pinned on the surface after pouring. These liquid drops left from on the surface would lower the electrical potential difference between liquid and electrode resulting lower output [37]. When 2 mL-liquid drops were dropped from a height of 20 cm, each liquid produced electrical output, as shown in Figure 3(e). The plot represents the maximum peak voltage when each liquid drop was dropped. Although the standard deviations of the voltage peaks are quite large due to the unconstrained nature of the drops, each liquid drop produced 15-20 V on average. This shows the possibility of producing electricity from common used waterbased liquids using a SLIPS.
A single-electrode-mode TENG discussed in previous paragraphs required an electrical ground for electrons to flow in between. For portable applications, having extra components, such as an electrical ground, can be a critical factor. Therefore, in Figure 4(a), two aluminum electrodes with SLIPSs were attached to an acrylic substrate to generate electrical output in freestanding mode. In the freestanding mode, the TENG can effectively convert sliding mechanical input into electricity. Figure 4 (a-i) shows the solid-solid contact freestanding TENG, and Figure 4(a-ii) shows the liquid-solid contact freestanding TENG. For the solid-solid contact TENG, nylon was used as the triboelectric material, and the sliding input was supplied manually (by hand). For the liquid-solid contact TENG, water was sprayed using a commercial shower head. The produced V OC output is shown in Figure 4(b,c), and the current output is shown in Figure S4. As shown in Figures 4(b) and S4 (a), both the V OC and I CC show periodic outputs as the solid triboelectric material slides in between two electrodes. In contrast, Figures 4(c) and S4(b) show rather random peak outputs due to the combination of waterdrops falling on to the surface randomly and waterdrops slipping to the ground. The electrical outputs by the solid triboelectric material and liquid drop show their possible application in unified-surface convertible TENGs.
Conclusions
In summary, we developed a solid-solid/liquid-solid convertible TENG using a PFPE infused surface. Using fluorine abundant materials, the SLIPS could be charged negatively when it came in contact with a counter-charged triboelectric material and utilized in both solid-solid and liquid-solid contact environments. Due to the negatively charged surface, the convertible TENG produced the highest peak V OC output of 122 V and peak I CC output of 6.4 μA when the contact material was solid nylon. In addition, the SLIPS on the convertible TENG was capable of repelling water-based liquids. The convertible TENG could produce 15-20 V peak voltages on average using various common used liquids. To demonstrate the applicability of the solid-solid/liquid-solid convertible TENG, a freestanding mode TENG was developed that could harvest electricity from the sliding mechanical motions of both solid and liquid materials. Therefore, the convertible TENG that can harvest electricity from both solid-solid and liquid-solid contacts can be unified solution for TENG surface fabrication.
Disclosure statement
No potential conflict of interest was reported by the authors. | 3,631.6 | 2020-01-31T00:00:00.000 | [
"Materials Science"
] |
Mechanism of Electronegativity Heterojunction of Nanometer Amorphous-Boron on Crystalline Silicon: An Overview
: The discovery of the extremely shallow amorphous boron-crystalline silicon heterojunction occurred during the development of highly sensitive, hard and robust detectors for low-penetration-depth ionizing radiation, such as ultraviolet photons and low-energy electrons (below 1 keV). For many years it was believed that the junction created by the chemical vapor deposition of amorphous boron on n-type crystalline silicon was a shallow p-n junction, although experimental results could not provide evidence for such a conclusion. Only recently, quantum-mechanics based modelling revealed the unique nature and the formation mechanism of this new junction. Here, we review the initiation and the history of understanding the a-B/c-Si interface (henceforth called the “boron-silicon junction”), as well as its importance for the microelectronics industry, followed by the scientific perception of the new junctions. Future developments and possible research directions are also discussed. best performance with respect to: responsivity, resolution, speed and stability.
Introduction: Initiation and History of Boron-Silicon Junctions and Importance of Si-Based Junctions/Diodes in Microelectronics
The first report about an ultra-shallow rectifying junction (diode) created by a pure boron atmospheric/low-pressure chemical vapor deposition (AP/LPCVD) on crystalline n-type silicon surface was published in 2006 [1]. Initially, the application which led to the development of this novel rectifying junction was: a linear and high Q-factor varactor diode designed for the capacitance tuning of frequency in RF circuits [2,3]. The demonstrated good performance in the varactor application did not attract the expected attention. Fortunately, in 2006 it became clear that a different field of applications would benefit even more from the excellent electrical properties of this extremely shallow junction. This junction would prove useful as an accurate, stable and reliable detector for low-penetration depth radiation such as UV light and low-energy electrons, which are applied in UV optical lithography and scanning electron microscopes.
Since 2006 a significant amount of research has been completed in the following directions: (1) optimization of the critical junction creation process, i.e., the chemical vapor deposition (CVD) of amorphous boron on n-type crystalline silicon (in a method called the "PureB" process); (2) device characterization and design optimization for a variety of applications; and (3) rendering the PureB process CMOS-compatible. Initially it was believed that the excellent electrical properties of the junction-especially the very low saturation currents which are typical for deep p-n junctions-were defined by the p + delta-doping of the n-type substrate which simultaneously occurs during the boron CVD process [1]. It was assumed that the saturation current was mainly dominated by the hole injection from the p + region into the n-substrate, as governed by the Gummel Number (G E ) of this region. The high level of electron injection typically dominating the current in the Schottky diode counterpart was suppressed, although the actual p + region was only a few nanometers thin [4]. However, it was difficult to explain the very high effective G E , keeping in mind the limited solubility of boron in silicon in the applied CVD temperature range from 500-700 • C [5]. Electrical measurements, as presented in [6], showed injection currents as low as a few 10 −20 A/µm 2 , which was comparable to those achieved in deep, heavily doped junctions. This corresponded with a G E in the order of 10 14 -10 15 atm/cm 2 , which was orders of magnitudes higher than what would be expected from nm-shallow junctions formed by bulk-doping the silicon. In the 700 • C PureB process the actual doping of the Si-substrate contributed to G E by roughly 10 12 atm/cm 2 , as documented in [4]. In diodes that were formed solely by such a doping of the Si-substrate, the total current would approach Schottky diode-like values.
The idea of the delta-doped p + layer playing any significant role in the junction formation was completely abandoned when it was demonstrated that similar excellent electrical properties could be achieved by boron CVD on n-type crystalline silicon substrates at temperatures as low as 400 • C, at which no doping of boron in silicon is expected [7].
Later on, in order to overcome the inconsistencies in the above-mentioned concept for the junction formation mechanism, it was suggested that the thickness of the amorphous boron layer was directly responsible for the junction behavior [1,4]. The bulk properties of the amorphous boron layer that could lead to the suppression of the electron injection included either: (i) a very short electron diffusion length and low electron mobility, which could cause quenching of the electron transport [4], or (ii) a wider bandgap than that of the Si, as proposed and supported by simulations in [8]. However, experimental results showed that even for a 1-second (s) boron deposition, where not even a monolayer of boron could be deposited, the junctions were reported to contain an equally high hole injection for both 700 • C and 500 • C depositions [6]. A closer look at the dependency between the thickness of the boron layer and the G E showed that after creating a very thin boron layer with a full coverage of the silicon substrate surface, the injection current stopped decreasing. This observation contradicted to the previous conclusion about the G E -boron thickness relation, and raised serious doubt about the dominant role of bulk amorphous boron for junction creation [6,8].
Thus, when the delta-doped p + layer and the as-deposited boron layer thickness could no longer be considered dominant factors in the junction creation, what remained was the search for an answer in the physics behind the boron-silicon interface. In [6] the following proposition was made: "based on experimental evidence, the effectively high Gummel Number of the p + region, which provides low saturation currents despite the shallowness of the junctions, was related to the formation of a virtually complete surface coverage of acceptor states as an interface property of boron on Si." In a later publication [9] this idea was further developed: "the results can be explained with a simple model assuming a monolayer of acceptor states at the interface that fills with electrons to give a monolayer of fixed negative charge. Furthermore, it can be assumed that the high resistivity of the very thin PureB layer acts as a semi-insulating layer allowing an inversion layer of holes to be built up. (. . . ) The monolayer of n-charge represents a very high electric field that binds the holes to the interface and limits their mobility, similar to the way a vertical electrical field attenuates the inversion layer mobility in MOS devices".
However, this explanation of the junction formation had two major weaknesses: 1.
There was no explanation as to where the "monolayer of acceptor states" providing "a monolayer of fixed negative charge" originates from; 2.
It was not explained what made the charge "fixed".
Despite the fact that the junction formation was most probably correctly allocated, the failure to explain its mechanism led to a "dose of despair". In [5] we read: "However, even if the chemical bonding structure of the interface was well known, translating it into an electrical structure is no straightforward task as can be appreciated evaluating the enormous number of studies devoted to understanding the metal-semiconductor interfaces of Schottky diodes".
Then, in 2017 a completely new concept regarding the junction formation with the PureB process was introduced [10]. It was proposed that this junction should not be considered a p-n type, and it should also not be assigned to any existing types of heterojunctions. That would clarify why the existing "instrumentarium" in semiconductor physics used to explain and predict the properties of known rectifying junctions could not be used successfully here. Instead, a deeper dive into solid-state physics and material science was proposed using a more powerful "scientific weapon"-the theory of quantum mechanics. An analysis of the junction formation was reported in [10], which concluded that the chemical interaction between the surface atoms of crystalline silicon and the first atomic layer of the as-deposited amorphous boron was the dominant factor leading to the formation of a depletion zone in the crystalline silicon originating from the surface. A first-principles quantum mechanics molecular dynamics simulation showed a very strong electric field across the a-B/c-Si interface systems where the charge transfer occurred mainly from the interface Si atoms to the neighboring B atoms. This electric field appeared to be responsible for the creation of a depletion zone in the n-silicon, resulting in a rectifying junction formation. A more detailed introduction of this hypothesis is provided in Section 3. Before that, in Section 2, information is provided on the PureB process and the most attractive electrical and optical characteristics of the boron-silicon junction as a radiation detector.
The PureB Process and Temperature Effects
Amorphous pure boron (a-B) thin films can be deposited on crystalline Si substrates in ultra clean chambers with high purity gasses, by using chemical vapor deposition tools (e.g., cold-wall reactors or hot-wall furnaces) and physical vapor deposition techniques like: molecular beam epitaxy [11,12] and sputtering [13].
Chemical vapor deposition (CVD) of the precursor molecule diborane (B 2 H 6 ) is mostly employed to obtain high-quality ultra-thin (2-10 nm) a-B films. The CVD tools operate in a range of 400-800 • C, with pressure ranging from tens of Torr to atmospheric pressure (760 Torr). For CVD at temperatures higher than 400 • C, the diborane molecules decompose at the surface of the substrate into gas-phase boron hydrates. The most common resulting species is BH 3 [14].
At the beginning of the deposition, the Si surface atoms have exposed dangling bonds to which the BH 3 molecules react, creating a Si-B bond and the release of hydrogen. If the Si dangling bonds are passivated with H atoms, the latter must be desorbed before any Si-B bonds can be formed. In fact, it has been observed that the presence of H 2 gas (generally used as a carrier and diluting gas) in the reactor reduces the a-B deposition rate [15]. When the coverage of the surface Si atoms by the BH x species is accomplished, the BH 3 molecules react with the exposed B dangling bonds, which then generates B-B bonds (along with H/H 2 release). The growth of the a-B film advances according to the adsorption-reaction mechanisms presented in [16].
Upon adsorption, the BH 3 molecules diffuse along the surface before forming a stable bond. At temperatures between 400-600 • C, their surface migration is limited and the a-B grows into islands with the surface reactive sites [16][17][18]. The diffusion is strongly temperature-dependent. Figure 1 illustrates a schematic cross-section of the layer stack of the B-Si junction and the corresponding High-Resolution Tunneling Electron Microscope (HRTEM) images at deposition temperatures of 400 • C and 700 • C [10]. The HRTEM images confirm the lack of boron-silicide (B x Si y ) layer for the samples obtained using the low temperature process (400 • C, Figure 1c), while for the samples prepared at the high temperature (700 • C, Figure 1d) a 1-2 nm-thick boron-silicide (B x Si y ) can be observed. It was also reported that the silicide forms a uniform layer at deposition temperatures higher than 750 • C [15]. For deposition at ∼500 • C and below, the a-B contains significant portions of hydrogen, due to incomplete precursor dissociation, which eventually coalesce into a more rough film [16,17]. When deposition temperatures are higher than 700 • C, the surface diffusivity of the adsorbed precursors is enhanced allowing a smooth, continuous film to be deposited of minimal thickness (a few nm) [16]. The diffusion of B atoms in bulk Si during the CVD of amorphous B becomes significant at temperatures above 750 • C. The experiments also showed that the diffusion rate of a-B at 600 • C is the same for both Si{0 0 1} and Si{1 1 1} surfaces, while at 800 • C, B atoms diffuse faster into the Si{0 0 1} subsurface [17].
Characterization of the Boron-Silicon Junction as a Radiation Detector
In the Introduction we revealed the main application driving the development of the PureB process: the detection of low-penetration depth radiation such as UV photons and low-energy electrons. Here we shall present the most attractive characteristics of the boron-silicon junction, in its application as a photodetector (photodiode). For this purpose we shall use as a reference the characteristics of an ideal photodetector, as presented in Table 1 and Figure 2 [19]. Figure 2 shows the vertical cross section of a silicon p-n junction photodetector and its electric circuit equivalent. Table 1 presents the parameter values of an ideal silicon photodiode, which provide the best performance with respect to: responsivity, resolution, speed and stability. Vertical cross section (a), and electrical equivalent circuit (b) of a silicon p-n junction photodetector, where I D is the dark current, I p is the photogenerated current, I n is the shot noise associated with the dark current, C j is the junction capacitance, R sh is the shunt resistance, R s is the series resistance and R L is the load resistance [19].
Responsivity
The structure of the boron-silicon junction satisfies in an excellent way the three parameters affecting the responsivity: passivation layer thickness, depletion depth and width (see Table 1). The junction can be essentially created by a single layer of boron atoms deposited on the n-type silicon substrate, in such a way that chemical bonds are formed between the boron atoms and the surface silicon atoms.
For reliable protection of the underlying silicon from oxidation and potentially detrimental environment conditions, a few extra boron layers with a total thickness of a few nanometers are deposited in practice. The depletion region, where the photogenerated electron-hole pairs can be separated and collected, starts from the very first atomic layer of the underlying n-type silicon substrate. The lost radiation absorbed in the passivation layer is minimal due to its thinness. Figure 3 shows the measured spectral responsivity of a boron-silicon junction with a ∼5 nm amorphous boron protection layer in the extreme ultra violet (EUV) spectral range, compared with the theoretically attainable values for an ideal Si-based photodetector and a commercial n + p photodiode (SXUV from ODC) [19,20]. The measured responsivity above the silicon edge (12.4 nm) is 0.265 A/W, which is very close to that of an ideal lossless system (0.27 A/W), indicating 100% internal quantum efficiency. The slight drop in responsivity at wavelengths shorter than the silicon edge can be assigned to a very thin silicon absorbing layer above the depletion zone. However, most probably another phenomenon is playing a dominant role here: after the silicon edge, the penetration depth of the photons decreases significantly leading to the absorption of more photons close to the detector surface. The kinetic energy of the freed electrons is high enough to allow them to overcome the internal electric field and to move in an arbitrary direction, with some being lost due to recombination with the holes, while others may even escape from the surface of the detector as secondary electrons. Figure 4 shows the measured spectral responsivity of a boron-silicon photodiode in the deep ultra violet (DUV) and vacuum ultra violet (VUV) spectral ranges [19]. As indicated in the figure, based on the measured responsivity at a 193 nm wavelength (0.0997 A/W) and the theoretical value (0.215 A/W), the quantum efficiency is: QE = 0.0997/0.215 ≈ 0.46. Considering the nearly 100% quantum efficiency measured at a 13.5 nm wavelength (Figure 4), the loss due to the photon-generated electron-hole pairs recombining in the diode depletion region can be regarded as negligible. The main reason for the quantum efficiency drop in the VUV/DUV spectral range, besides the reflection-induced photon loss on the diode surface, is the extremely low penetration depth of the photons in the boron and silicon: only a few nanometers [21]. Because of this, even a 2-nm-thick boron layer will absorb a substantial part of the incident radiation.
Resolution
For achieving a very high resolution it is very important the interface between the silicon substrate and the passivation layer on top of it to be defect free. This is achieved by the PureB CVD process, during which the boron atoms form strong chemical bonds with the surface silicon atoms. The number of silicon dangling bonds (acting as recombination centers is extremely small. This results in an excellent I-V characteristic typical for a highquality deep p-n junction, with a very low reverse-bias (dark) current and an ideality factor very close to 1 ( Figure 5) [22], despite the fact the depletion region starts literally from the boron-silicon interface. A low dark current means a very low shot noise associated with it. The low value of the dark current maintained at high reverse-bias voltage is an evidence for a very high shunt resistance R sh .
Stability
The strong chemical bonds formed between the silicon and the boron atoms provide an excellent radiation shield, as they cannot be destroyed by UV photons or low-energy electrons. Experiments with extensive exposure up to 220 kJ/cm 2 of a boron-silicon photodiode at 13.5 nm radiation could not reveal a measurable degradation of the responsivity [23]. A very small amount of responsivity degradation was observed in the VUV spectrum. Figure 6 shows the responsivity degradation of three boron-silicon junctions exposed to 121 nm radiation (radiation around 120 nm wavelength is considered the most challenging in the VUV spectrum) [19]. The difference between the three samples is the oxygen content on the surface, expressed as a thickness in nanometers. In this experiment high exposure levels are not necessary, as any available drop in responsivity is evident almost immediately at the start of the VUV exposure, subsequently settling to its lower level. The presence of oxygen is assigned to local oxidation of silicon through pin holes in the thin boron layer which only partially covers silicon. With VUV exposure the oxidized silicon surface is positively charged due to the secondary electron emission, temporarily reducing the responsivity. As can be seen in Figure 6, with 1 nm oxide the degradation is extremely small-within the margin of uncertainty of the measurement equipment. It is important to mention that this kind of degradation is recoverable with time as the positive charge dissipates very slowly. Another factor influencing the photodiode stability is the working environment. In this aspect, the boron-silicon junction demonstrates very high robustness to harsh working conditions. Boron itself is a very stable material at room temperature. Furthermore, a nanometer-thin amorphous boron layer, when completely covering the underlying silicon, acts as an excellent barrier protecting the silicon substrate from detrimental environmental elements such as hydrogen radicals and oxygen plasma, used for surface cleaning purposes. Extensive exposure to such elements has not resulted in noticeable deterioration of the electrical or optical characteristics of the boron-silicon photodiode [23].
Operational Speed
For a fast reaction to pulsed radiation, the photogenerated charge must be removed quickly from the depletion region of the photodetector and delivered to the interface electronics. For this purpose the time constant of the detector-defined by the junction capacitance C j and the series resistance R s -must be small (Figure 2b). The value of the series resistance is dominated by the sheet resistance of the surface of the detector. This is because after separation, the photogenerated charge must reach the top ring electrode (Figure 2a) by moving along the surface of the detector. However, due to the high resistivity of the very thin boron layer and the fact that the depletion region starts from the silicon surface, the sheet resistance is very high. This makes the boron-silicon photodiode extremely slow. Furthermore, unlike a typical p-n junction detector-where the time constant can be decreased by reducing C j with the application of a higher reverse-bias voltage, the same approach does not work well with the boron-silicon detector. Just the opposite, the time constant of the boron-silicon detector increases with a higher reverse-bias voltage despite the reduction in the junction capacitance. This is because apparently the series resistance increases faster than the decrease in the capacitance [24].
A solution to this problem is to trade some responsivity for the benefit of the time constant. For example, deploying a metal Al grid on top of the boron layer, which covers just 1% of the surface, leads to a dramatic reduction in the sheet resistance and the time constant, respectively [19].
Boron-Silicon Junction Formation Premise Based on ab Initio Modeling
To gain insight into the preparation processes and the structural and electronic properties of the boron-silicon heterojunction, we modeled the decomposition of B 2 H 6 molecules and the deposition of BH n (n = 1-4) molecular/radicals on a Si substrate at the early stages of the PureB processes [10,25,26], and a-B/c-Si interfaces [10,27]. Here we briefly review our recent theoretical work on the local structure, chemistry and electronic properties of a-B/c-Si interfaces. At present, Si{0 0 1} wafers are used to prepare boron-silicon heterojunctions. The unusual structure of the Si{1 1 1} surfaces and commercial availability of Si{1 1 1} wafers stimulate us to include a-B/Si{1 1 1} interfaces in our study, as well. All simulations were performed using the first-principles code VASP (Vienna Ab initio Simulation Package). This code is based on a pseudo-potential plane-wave approach within the density-functional theory (DFT) [28]. It employs the projector augmented-wave (PAW) method [29], and it allows variable fractional occupation number, which works well for interfaces between insulators and metals [28,30]. An ab initio molecular dynamics (AIMD) simulation employs the finite-temperature density-functional theory of the one-electron states, where the exact energy minimization and calculation of the exact Hellmann-Feynman forces occur after each MD step using both the preconditioned conjugate techniques, and the Nose dynamics to generate a canonical NVT ensemble [28]. The exchange and correlation terms are described using the generalized gradient approximation (GGA-PBE) [31]. For electronic structure calculations, we used cut-off energies of 400.0 eV for the wave functions, 550.0 eV for the augmentation functions, and dense grids in the irreducible Brillouin zone (BZ) of the cells [32]. For the AIMD simulations we used cut-off energies of 250.0 eV and the Γ-point in the BZs. This is due to the whole system lacking periodicity in such crystal/amorphous interfaces [28,33].
We created amorphous B by first equilibrating the samples at 3000 K for 2000 iterations (at 1.5 fs per iteration, which totals 3 ps) and then cooling systems to the desired temperature. Next the obtained a-B samples were placed on the crystalline Si substrates (c-Si), forming a-B/c-Si interfaces for the subsequent AIMD simulations. The prepared a-B/c-Si systems were then allowed to equilibrate at 1000 K with the Si atoms in the substrate pinned. Then, all the atoms including the substrate were allowed to relax at 1000 K over a period of 6 ps. Finally we relaxed the atoms at 0 K to eliminate internal forces and stress. Figure 7 displays snapshots of the relaxed a-B/Si{0 0 1} and a-B/Si{1 1 1} interfaces from the AIMD simulations with the inputs, settings and treatments presented in the previous section. From Figure 7 we find the following features common to both interfaces:
Local Chemistry of the c-Si/a-B Interfaces
1.
The Si atoms in the substrates are positioned in an orderly fashion, whereas the B atoms remain disordered; 2.
There is a spacing separating the crystalline Si and amorphous B at both the Si{0 0 1}/a-B (Figure 7a) and the Si{1 1 1}/a-B (Figure 7d) interfaces; 3.
There is a certain amount of disordering for the surficial Si atoms at both substrates. Figure 7. Snapshots of (a,d) the equilibrated interface and (b,c) related typical Si coordination for the Si{0 0 1}/a-B and the Si{1 1 1}/a-B interfaces, respectively. The green spheres represent B and blue Si.
A closer look reveals subtle differences between the two interfaces. The spacing between the c-Si substrate and a-B at a-B/Si{1 1 1} is apparently larger than that at a-B/Si{0 0 1}. Moreover, the surficial Si atoms at a-B/Si{0 0 1} have more B neighbors than those at a-B/Si{1 1 1}. We analyzed the Si-B bonding at both interfaces with about 20 nterfaces each. The cut-off of the Si-B bonds is 2.28 Å, which is 10 % longer than the average value of the B-B bond (1.79 Å) and the Si-Si bond-length (2.35 Å) in the elemental solids, respectively, taking into account the exponential decay of bond strength as a function of interatomic distance [34]. The results are plotted in Figure 8.
As shown in Figure 8, most surficial Si (88%) at the a-B/Si{1 1 1} interfaces have only one B neighbor. Another 10% of the interfacial Si atoms are coordinated with two B atoms. The Si coordination of the surficial Si atoms at the a-B/Si{0 0 1} interfaces is more complex. The surficial Si atoms with two B neighbors are dominant at the a-B/Si{0 0 1} interface (57%). 29% of the interfacial Si atoms have three B neighbors, a relatively small amount of the interfacial Si (9%) have one B neighbor, and only 4% of the surface Si atoms have four B neighbors. The larger variety of Si coordination at the a-B/Si{0 0 1} interfaces is related to the reduced symmetry constraint from the Si substrates, as each superficial Si is bonded to only two Si atoms at the subsurface. The different local Si-B bonding indicates variation in the B arrangements at the interfaces. The dominant surficial Si atoms with one B and three Si atoms at a-B/Si{1 1 1}, and those Si with two B neighbors and two Si neighbors at a-B/Si{0 0 1}, satisfy the sp 3 type hybridization for Si [35]. The statistical analysis also produced the averaged spacing between surficial Si and the neighboring B atoms. A larger spacing (2.0 Å) was obtained at the a-B/Si{1 1 1} interface compared to that at the a-B/Si{0 0 1} interface (1.2 Å).
Electronic Properties of the c-Si/a-B Interfaces
Electronic structure calculations were performed for the relaxed interfaces. Fractions of the obtained electron density distributions are shown in Figure 9. Based on the electron densities in the interface systems, we analyzed the charges at each atomic site using the Bader charge model [36]. The charges obtained at the atomic sites at both interfaces are plotted in Figure 10. As shown in Figure 9, the electron clouds form regular shapes and are concentrated around the Si-Si bonds in the substrates. This corresponds to the crystalline structures and their covalent nature. Meanwhile, electron clouds in the a-B part show irregular forms of high electron density, which corresponds to the local disordering. At the interfacial region, there are clear clouds for both interfaces between the interfacing Si-B atoms, which indicate chemical bonding. Figure 9 also shows that the electron clouds are more present at the B atoms, which is an indication of charge transfer from the Si atoms to the B atoms. At the a-B/Si{1 1 1} interface, each surficial Si is coordinated to one B with the electron clouds forming regular shapes, whereas at a-B/Si{0 0 1}, each Si has two or three B neighbors with dense electron clouds (see Figure 9 for both). Figure 10 includes the charges at the atomic sites at both interfaces. Clearly, the Si and a-B atoms located away from the interfacial layers are electronically neutral. Charge transfer only occurs from interfacial Si atoms to interfacial B atoms. Analysis results revealed an average amount of charge transfer of 0.75 e/Si (4.7 × 10 18 e/m 2 ) at a-B/Si{0 0 1}, and 0.40 e/Si (2.7 × 10 18 e/m 2 ) at a-B/Si{1 1 1}. These values correspond to the number of Si-B bonds at the interfaces. These values are smaller than those from the ionic model (Si 2+ at a-B/Si{0 0 1} and Si + at a-B/Si{1 1 1}). This is indicative of a bond of a strong covalent nature between the interfacing Si and B atoms (electronegativity value is 2.04 on Pauling scale for B and 1.90 for Si).
The charge transfer from the interfacial Si to B induces charge barriers at the a-B/c-Si interfaces. The formation of the charge barriers is essential for the heterojunctions/diodes. It also causes band bending of the heterojunctions.
Band Bending for the Electronegativity Junctions
Our AIMD simulations and electronic structure calculations for the a-B/c-Si interfaces revealed the formation of well-separated Si-B interfaces. Charge transfer occurs from the interfacial Si to B, forming Si +q /B -q polar plates. Moreover, our study also showed that amorphous B located away from the interfaces is intrinsically a 'bad' metal with localized defect states. Therefore, at a-B/c-Si interfaces, the Fermi level of a-B will be changed near the interface due to extra electrons from Si filling of the defect states. Based on the results above and the semiconducting nature of bulk Si, we can build a band bending model for the boron-silicon heterojunctions. Together with the charge model, it is schematically shown in Figure 11. The charge transfer and corresponding charge barriers cause band bending at the a-B/c-Si interfaces, as schematically shown in Figure 11b. The boron-silicon heterojunction properties essentially originate from the charge transfer occurring at the interfaces due to the differences between the electronegativities of Si and B.
The abundance of positive charge in the top atomic layer of the n-type silicon substrate acts as a highly doped p-region in a p-n junction, attracting free electrons from the bulk n-type silicon, and leading to the formation of a depletion region [10,27]. The significant amount of positive static charge at the surface of the silicon n-type substrate explains the high Gummel Number and the low saturation current of the boron-silicon junction, which is typical for high-quality deep p-n junctions.
Conclusions
Many research endeavors throughout the history of science, aside from achieving their primary research goals, have managed to generate unexpected additional new knowledge. We believe that this is also the case with the development of PureB technology. For many years the research efforts have been mainly focused on the development of the process itself. Only recently has the rectifying junction created through the PureB process-demonstrating properties not typical for a shallow p-n junction and any other existing rectifying junctionattracted more attention. In this paper we presented the main features of the PureB process and the qualities of the boron-silicon junction as a radiation detector. We also discussed how the understanding of the boron-silicon junction formation has evolved. The recently proposed quantum-mechanical junction formation mechanism presented proves the power of quantum-mechanics-based approaches to solid-state physics and materials science, offering a world of rich variety.
Overall, our understanding is that the chemical interaction between the surface atoms of crystalline silicon and the first atomic layer of the amorphous boron is the dominant factor leading to the rectifying function of boron-silicon junctions. Obviously, boron doping is not present in this model and thus the boron-silicon junction does not belong to p+n type junctions. Although a-B exhibits a high density of states in the band gap of Si, these states are of a localized nature. The electrons in a-B are conducted via a hopping mechanism, thereby disqualifying the a-B/c-Si diodes from the category of Schottky junctions. Furthermore, the a-B/c-Si diodes cannot be classified into any of the existing types of heterojunctions in semiconductor physics.
The new junction has the application potential of reaching far beyond its primary target. The new junction formation mechanism may lead to a technology breakthrough in a number of scientific fields such as: semiconductor wide-bandgap material processing, electron microscopy, optics, space exploration, chemical engineering, quantum mechanics, nano-materials, etc. Furthermore, this type of junction may trigger many innovations in industry, e.g., direct ultraviolet detection and imaging, energy harvesting (solar cells), etc.
New possible research directions may include: (i) better understanding of the quantum phenomena responsible for the formation of the boron-silicon junction; (ii) creation of an analytical model for the boron-silicon junction and extending this model to other semiconductor materials, such as SiC and wide-bandgap materials; and (iii) study of the electrical, optical and mechanical properties of devices developed for different applications, based on the boron-silicon junction technology.
Author Contributions: S.N.: writing Section 1 and Section 2.2, contributing to Conclusions; P.S.: writing Section 2.1; C.F. and P.X.F.: writing Section 3, contributing to Conclusions. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,550.2 | 2021-01-26T00:00:00.000 | [
"Physics"
] |
Reward (Mis)design for Autonomous Driving
This article considers the problem of diagnosing certain common errors in reward design. Its insights are also applicable to the design of cost functions and performance metrics more generally. To diagnose common errors, we develop 8 simple sanity checks for identifying flaws in reward functions. These sanity checks are applied to reward functions from past work on reinforcement learning (RL) for autonomous driving (AD), revealing near-universal flaws in reward design for AD that might also exist pervasively across reward design for other tasks. Lastly, we explore promising directions that may aid the design of reward functions for AD in subsequent research, following a process of inquiry that can be adapted to other domains.
Introduction
Treatments of reinforcement learning often assume the reward function is given and fixed. However, in practice, the correct reward function for a sequential decision-making problem is rarely clear. Unfortunately, the process for designing a reward function (i.e., reward design)-despite its criticality in specifying the problem to be solved-is given scant attention in introductory texts. 1 For example, Sutton and Barto's standard text on reinforcement learning [45, pp. 53-54, 469] devotes merely 4 paragraphs to reward design in the absence of a known performance metric. Anecdotally, reward design is widely acknowledged as a difficult task, especially for people without considerable experience doing so. Further, Dulac-Arnold et al. [14] recently highlighted learning from "multi-objective or poorly specified reward functions" as a critical obstacle hampering the application of reinforcement learning to real-world problems. Additionally, the problem of reward design is highly related to the more general problem of designing performance metrics for optimization-whether manual or automated optimization-and is equivalent to designing cost functions for planning and control (Section 2), making a discussion ✩ This paper is part of the Special Issue: "Risk-aware Autonomous Systems: Theory and Practice". Relationships of terminology among related fields. Note that fitness and objective are similar in meaning to return but not identical, as we describe in Section 2. Terms like "fitness" and "utility" that do not have accompanying terminology in the rightmost column (and have an "-" instead) do not necessarily describe performance metrics that can be expressed as Markovian reward functions. (See Abel et al. for recent work on expressing non-Markovian performance metrics as Markovian reward functions [1]. ) We will largely use the terminology of RL, but we will refer to trajectory-level performance metrics as utility functions, since "return function" is not a common concept. Also, where feasible, this article's discussions focus on trajectory-level utility functions, Gs, rather than reward functions. We do so for clarity, having judged subjectively that the consequences of a utility function are more analytically accessible than those of a reward function.
The challenge of reward design for autonomous driving
We now consider four challenges of reward design for AD. We note however that these challenges apply widely to other tasks, particularly those that occur in the physical world and therefore can have effects beyond the typically envisioned scope of the task.
Utility function depends on numerous attributes First, driving is a multiattribute problem, meaning that it encompasses numerous attributes that each contribute to the utility of driving. These attributes may include measurements of progress to the destination, time spent driving, collisions, obeying the law, fuel consumption, vehicle wear, passenger experience, and various impacts on the world outside the vehicle. These external impacts include those on people in other cars, pedestrians, bicyclists, and the government entity that builds and maintains driving infrastructure, as well as pollution and the climate more broadly. Defining a trajectory-performance metric G for AD requires (1) identifying all such attributes and specifying them quantitatively and (2) combining them into a utility function that outputs a single real-valued number. 4 Utility function depends on a large and context-dependent set of stakeholders Second, the utility function G should conform to stakeholders' interests. In AD, these stakeholders might include users such as passengers; end consumers; partnering businesses like taxi and ride-sharing companies; automotive manufacturers; governmental regulators; providers of research funding; nearby pedestrians, passengers, bicyclists, and residents; and broader society. These stakeholders and the weight given to their interests will differ among vehicles (e.g. with different manufacturers) and will even differ for the same car in different contexts. One such context that could affect G is the driving region, with which values, preferences, and driving culture may differ substantially. Therefore ideal reward design for AD might require designing numerous reward functions (or Gs, more generally). Alternatively, the aforementioned stakeholder-based context could be part of the observation signal, permitting a single monolithic reward function. Such a reward function would allow a policy to be learned across numerous stakeholder-based contexts and generalize to new such contexts.
Lack of rigorous methods for evaluating a utility function Third, when choosing a utility function for algorithmic optimization in some context(s), a critical question arises: given a set of stakeholders, how can one utility function be deemed better or worse than another, and by how much? We have not yet found research on how to measure the degree of a utility function's conformity to stakeholders' interests. We also have not found formal documentation of current common practice for evaluating a utility function. Anecdotally, such evaluation often involves both subjectively judging policies that were trained from candidate utility functions and reflecting upon how the utility function might be contributing to undesirable behavior observed in these trained policies. Setting aside the dangers of allowing specific learning algorithms to inform the design of the utility function (see the discussion of trial-and-error reward design in Section 4.6), this common approach has not been distilled into a specific process, has not been carefully examined, and in practice varies substantially among different designers of reward functions or other utility functions. However, our sanity checks in Section 4 represent useful steps in this direction. Table 1 Sanity checks one can perform to ensure a reward function does not suffer from certain common problems. Each sanity check is described by what problematic characteristic to look for. Failure of any of the first 5 sanity checks identifies problems with the reward function; failure of the last 3 checks should be considered a warning.
Sanity check failures
Brief explanation Potential intervention(s) 1 Unsafe reward shaping If reward includes guidance on behavior that deviates from only measuring desired outcomes, reward shaping exists.
Separately define the true reward function and any shaping reward. Report both true return and shaped return. Change it to an applicable safe reward shaping method. Remove reward shaping. 2 Mismatch in people's and reward function's preference orderings If there is human consensus that one trajectory is better than another, the reward function should agree.
Change the reward function to align its preferences with human consensus.
Undesired risk tolerance via indifference points
Assess a reward function's risk tolerance via indifference points and compare to a human-derived acceptable risk tolerance.
Change reward function to align its risk tolerance with human-derived level. 4 Learnable loophole(s) If learned policies show a pattern of undesirable behavior, consider whether it is explicitly encouraged by reward.
Remove encouragement of the loophole(s) from the reward function. 5 Missing attribute(s) If desired outcomes are not part of reward function, it is indifferent to them.
Add missing attribute(s). 6 Redundant attribute(s) Two or more reward function attributes include measurements of the same outcome.
Eliminate redundancy. 7 Trial-and-error reward design Tuning the reward function to improve RL agents' performances has unexamined consequences.
Only use observations of behavior to improve the reward function's measurement of task outcomes or to tune separately defined shaping reward.
8 Incomplete description of problem specification Missing descriptions of reward function, termination conditions, discount factor, or time step duration may indicate insufficient consideration of the problem specification.
In research publications, write the full problem specification and why it was chosen. The process might reveal issues.
Elicits naïve reward shaping A fourth difficulty may be specific to designing a per-step feedback function like a reward function or a cost function. Driving is a task domain with delayed feedback, in that much of the utility of a drive is contained at its end (e.g., based on whether the goal was reached). For drives of minutes or hours, such delayed information about performance renders credit assignment to behavior within a trajectory difficult. In part because of this difficulty, reward shaping appears quite tempting and its naïve application is extremely common in research we reviewed (see Section 4.1).
Sanity checks for reward functions
In this section we develop a set of 8 conceptually simple sanity checks for critiquing and improving reward functions (or cost functions, equivalently). Table 1 summarizes these sanity checks. Many of the tests apply more broadly to any (trajectory-level) utility function. We demonstrate their usage through critically reviewing reward functions used in RL for AD research.
The 19 publications we review include every publication on RL for autonomous driving we found that had been published by the beginning of this survey process at a top-tier conference or journal focusing on robotics or machine learning [13,23,29,24,8,53,21,48,37,28,30,25,20], as well as some publications from respected venues that focus more on autonomous driving [32,9,55,33,46,4]. Of the 19 publications, we arbitrarily designated 10 as "focus papers", for which we strove to exhaustively characterize the reward function and related aspects of the task description, typically through detailed correspondence with the authors (see Section 4.6). These 10 focus papers are detailed in Appendix A and Appendix C.
We present 8 sanity checks below, 5 each in their own detailed subsections and 3 in the final subsection, Section 4.6. These tests have overlap regarding what problems they can uncover, but each sanity check entails a distinct inquiry. Their application to these 19 publications reveals multiple prevalent patterns of problematic reward design.
Identifying unsafe reward shaping
In the standard text on artificial intelligence, Russell and Norvig assert, "As a general rule, it is better to design performance metrics according to what one actually wants to be achieved in the environment, rather than according to how one thinks the agent should behave" [42, p. 39]. In the standard text on RL, Sutton and Barto [45, p. 54] agree in almost the same phrasing, adding that imparting knowledge about effective behavior is better done via the initial policy or initial value function. More succinctly, specify how to measure outcomes, not how to achieve them. Exceptions to this rule should be thoughtfully justified.
Yet using rewards to encourage and hint at generally desirable behavior-often with the intention of making learning more efficient and tractable when informative reward is infrequent or inaccessible by most policies-is intuitively appealing. This practice has been formalized as reward shaping, in which the learning agent's received reward is the sum of true reward and shaping reward. The boundary between these two types of rewards is not always clear when the "true" objective is not given, such as in AD. Nonetheless, some rewards are more clearly one of the two types.
The dangers of reward shaping are well documented [40,35,27]. These dangers include creating "optimal" policies that perform catastrophically. Perhaps worse, reward shaping can appear to help by increasing learning speed without the reward designer realizing that they have, roughly speaking, decreased the upper bound on performance by changing the reward function's preference ordering over policies.
There is a small canon of reward-shaping research that focuses on how to perform reward shaping with certain safety guarantees [35,56,5,12,18]. Safety here means that the reward shaping has some guarantee that it will not harm learning, which differs from its colloquial definition of avoiding harm to people or property. A common safety guarantee is policy invariance, which is having the same set of optimal policies with or without the shaping rewards. We generally recommend that attempts to shape rewards be informed by this literature on safe reward shaping. For all techniques with such guarantees, shaping rewards are designed separately from the true reward function. Also, if possible, the utility function G that arises from the true reward should be equivalent to the main performance metric used for evaluating learned policies.
We now formulate the above exposition as a sanity check. Unsafe reward shaping can be identified by first identifying reward shaping-without regard to safety-and then determining whether the designers of the reward function are either following a known safe reward shaping method or have a persuasive argument that their shaped rewards are safe.
Application to AD Acknowledging the subjectivity of classifying whether reward is shaped when shaping is not explicitly discussed, we confidently judge that, of the 19 publications we surveyed, 13 included reward shaping via one or more attributes of their reward functions [13,29,24,32,53,9,21,48,37,28,30,46,20]. Another 2 included reward attributes which could arguably be considered reward shaping [54,4]. Examples of behavior encouraged by reward shaping in these 13 publications are staying close to the center of the lane [24], passing other vehicles [32], not changing lanes [21], increasing distances from other vehicles [53], avoiding overlap with the opposite-direction lane [13,29], and steering straight at all times [9]. Other examples can be found in Appendix B. All of these encouraged behaviors are heuristics for how to achieve good driving-violating the aforementioned advice of Russell, Norvig, Sutton, and Barto-and it is often easy to construct scenarios in which they discourage good driving. For example, the reward shaping attribute that penalizes changing lanes [21] would discourage moving to lanes farther from other vehicles or pedestrians, including those acting unpredictably. Many of the examples above of behaviors encouraged by shaping rewards might be viewed as metrics that are not attributes of the true utility function yet are highly and positively correlated with performance. As Amodei et al. [3] discussed, rewarding behavior that is correlated with performance can backfire, since strongly optimizing such reward can result in policies that trade increased accumulation of the shaping rewards for large reductions in other performance-related outcomes, driving down the overall performance. This concept has been memorably aphorized as Goodhart's law: "When a [proxy] measure becomes a target, it ceases to be a good measure." [44,17].
Ostensibly, a similar criticism could be aimed at measures that should be part of the true reward function. For instance, assume that reducing gas cost and avoiding collisions are attributes of the true utility function. Then reducing gas cost could discourage accelerating, even when doing so would avoid a potential collision. The critical difference, however, between attributes measuring desired outcomes and reward-shaping attributes is that trading off two or more true attributes of the utility function can result in higher overall utility and therefore be desirable. In our example above, reducing a correctly weighted gas cost would presumably have negligible effect on the frequency of collisions, since the benefit of avoiding collisions would far outweigh the benefit of reducing gas cost. Another perspective on this difference is that effective optimization can increase reward-shaping attributes at the expense of overall utility, whereas effective optimization may increase true utility attributes at the expense of other utility attributes, but not at the expense of overall utility.
Of the 13 publications which use reward functions we are confident are shaped, 8 were in the set of 10 focus papers [13,29,24,32,53,9,21,48]. None of the 8 papers explicitly described the separation between their shaping rewards from their true rewards, and none discussed policy invariance or other guarantees regarding the safety of their reward shaping. Of these 8 papers, only Jaritz et al. and Toromanoff et al. acknowledged their usage of reward shaping, and only the former discussed its undesirable consequences. Jaritz et al. write "the bots do not achieve optimal trajectories ... [in part because] the car will always try to remain in the track center", which their reward function explicitly incentivizes. Further, in most of these 8 papers with reward shaping, the performance of learned policies was not compared in terms of their return but rather according to one or more other performance metrics, obscuring how much the undesirable behavior (e.g., frequent collisions) was a result of the RL algorithm's imperfect optimization or of the reward function it was optimizing against. In their work on learning reward functions, Ibarz et al. [22] provide other useful examples of how to analyze an alternative reward function against returns from the true reward function. 3 of these 8 papers did report return [53,21,9].
Comparing preference orderings
Although it is difficult for humans to score trajectories or policies in ways that are consistent with utility theory, simply judging one trajectory as better than another is sometimes easy. Accordingly, one method for critiquing a utility function is to compare the utility function's trajectory preferences to some ground-truth preferences, whether expressed by a single human or a decision system among multiple stakeholders, such as the issuance of regulations. This preference comparison , where ≺ means "is less preferred than". Finding a τ A and τ B for which this statement is false indicates a flaw in the utility function but does not by itself evaluate the severity of the flaw. However, severity is implied when one trajectory is strongly preferred. Note that we focus here on evaluating a utility function, which differs from learning a utility function from preferences over trajectories or subtrajectories (see Section 5.4), after which this sanity check and others can be applied.
Application to AD We apply this comparison by choosing two trajectories such that, under non-exceptional circumstances, one trajectory is strongly preferred. We specifically let τ crash be a drive that is successful until crashing halfway to its destination and let τ idle be the safe trajectory of a vehicle choosing to stay motionless where it was last parked. Fig. 2 illustrates τ crash and τ idle . Of the 10 focus papers, 9 permit estimating G(τ crash ) and G(τ idle ). Huegle et al. [21] does not (see Appendix C). For the calculation of these utilities here and later in this article, we assume reward is not temporally discounted in the problem specification-which is generally considered correct for episodic tasks [45, p. 68] like these-despite nearly all papers' adherence to the current best practice of discounting future reward to aid deep reinforcement learning solutions (as discussed by Pohlen et al. [39]).
We presume that any appropriate set of stakeholders would prefer a vehicle to be left idle rather than to proceed to a certain collision: τ crash ≺ τ idle . Yet of these 9 evaluated reward functions, only 2 have reward functions with the correct preference and 7 have reward functions that would prefer τ crash and its collision. These 7 papers are identified on the left side of Fig. 3, under τ idle ≺ τ crash . We do not calculate utilities for the reward functions of the 9 papers that were not in the set of focus papers, but our examination of these other reward functions suggests a similar proportion of them would likewise have an incorrect ordering. Calculation of returns for these trajectories allows a much-needed sanity check for researchers conducting RL-for-AD projects, avoiding reward functions that are egregiously dangerous in this particular manner.
Comparing indifference points
A more complex form of preference comparison reveals problems in the 2 papers that passed the test in Section 4.2. For three trajectories τ A ≺ τ B ≺ τ C , the continuity axiom of utility theory states that there is some probability p such that a rational agent is indifferent between (1) τ B and (2) sampling from a Bernoulli distribution over τ A and τ C , where τ C occurs with probability p [52]: This indifference point p can often be compared to a ground-truth indifference point derived from human stakeholders that reveals their risk tolerance.
The use of indifference points for AD below is an exemplar of a widely applicable methodology for testing a reward function. This methodology entails choosing τ B to be a trajectory that can be achieved with certainty, often by some default year olds) [47], as well as a rough estimate of km per collision for a drunk 16-17 year old (from applying a 37x risk for blood alcohol concentration ≥ 0.08, as estimated by Peck et al. [38]). *The task domain of Jaritz et al. [24] was presented as a racing video game and therefore should not be judged by real-world safety standards.
behavior or inaction. τ A and τ C are then chosen to contain two possible outcomes from a risky departure from default behavior or inaction, where τ C is a successful risky outcome and τ A is an unsuccessful risky outcome. From another perspective, τ A is losing a gamble, τ B is not gambling, and τ C is winning a gamble. One set of examples for (τ A , τ B , τ C ) in a domain other than AD includes trajectories from when a video game agent can deterministically finish a level with only one action: (seeking more points but dying because time runs out, finishing the level without seeking additional points, getting more points and finishing the level before time runs out).
Application to AD To apply this test of preferences over probabilistic outcomes to AD, we add τ succ to τ crash and τ idle from Section 4.2, where τ succ is a trajectory that successfully reaches the destination. τ crash ≺ τ idle ≺ τ succ . Therefore, choosing p amounts to setting permissible risk of crashing amongst otherwise successful trips. In other words, a policy that has higher risk than this threshold p is less preferred than one that refuses to drive at all. Human drivers appear to conduct similar analyses, sometimes refusing to drive when faced with a significant probability of collision, such as during severe weather. Fig. 3 displays the calculated p converted to a more interpretable metric: km per collision 5 at which driving is equally preferable to not deploying a vehicle. For comparison, we also plot estimates of police-reported collisions per km for various categories of humans. These human-derived indifference points provide very rough bounds on US society's indifference point, since drunk driving is considered illegal and 16-17 year old US citizens are permitted to drive. As the figure shows, of those 9 focus papers that permit this form of analysis, 0 require driving more safely than a legally drunk US 16-17 year old teenager. The most risk-averse reward function by this metric [8] would approve driving by a policy that crashes 2000 times as often as our estimate of drunk 16-17 year old US drivers. An argument against this test-and more broadly against requiring the utility function to enforce driving at or above human-level safety-is that penalizing collisions too much could cause an RL algorithm to correctly learn that its current policy is not safe enough for driving, causing it to get stuck in a conservative local optimum of not moving. This issue however can potentially be overcome by creating sufficiently good starting policies or by performing reward shaping explicitly and rigorously. Further, there is a significant issue with the argument above, the argument that the reward function should encourage the RL algorithm to gather driving experience by being extremely lenient with collisions. In particular, whether a specific weighting of a collision penalty will effectively discourage collisions without making the vehicle avoid driving is dependent on the performance of the current policy. As the RL algorithm improves its driving, the effective collision-weighting values would generally need to increase. The true reward function is part of a task specification and should not change as a function of the policy. However, such dynamic weighting could be achieved by defining the true reward function to have a risk tolerance that is desirable in real-world situations then adjusting the weight given to collisions via a form of dynamic reward shaping: the collisions weight would start small and be gradually increased to scaffold learning while the policy improves, eventually reaching its true weight value. In this strategy, reward shaping is temporary and therefore policy invariance is achieved once shaping ends.
Identifying learnable loopholes
Once a designed reward function is used for learning a policy, observable patterns of undesirable behavior might emerge. When such behavior increases utility, it is often referred to as reward hacking or specification gaming (terms that implicitly and unfairly blame the agent for correctly optimizing a flawed utility function). Colloquially, such technically legal violations [40,2,35], which often involve learned trajectories that actually loop in physical space to repeatedly accrue some reward but prevent long-term progress (see Fig. 4). 6 However, these loopholes can also be subtle, not dominating performance but nonetheless limiting it, and in such cases loopholes are more difficult to find through observations of learned behavior. In general, both subtle and blatant loopholes might be found by observing undesirable behavior and reflecting that the reward function encourages that behavior. A more rigorous improvement on such reflection is to estimate the utility (or return, equivalently) for an observed trajectory that contains the undesirable behavior and also estimate the utility for that trajectory modified minimally to not contain the undesirable behavior; if the less desirable trajectory receives more utility, then a loophole likely exists. An alternative perspective on this improvement is that it is a method for finding two trajectories with which to perform the sanity check of comparing preference orderings (Section 4.2).
Application to AD We do not observe blatantly catastrophic loopholes in the RL-for-AD literature, perhaps because most reward functions for AD thus far appear to have been designed via trial and error (see Section 4.6); in such trial-and-error design, any such catastrophic loopholes would likely get caught, and the reward function would then be tuned to avoid them. However, the learned-policy limitation that Jaritz et al. [24] discussed (see Section 4.1) is an example of a learned loophole in RL for AD.
Missing attributes
Utility functions sometimes lack attributes needed to holistically judge the performance of a trajectory. Specifically, as discussed in Amodei et al. [3], omission of an attribute "implicitly expresses indifference" regarding that attribute's measurements of outcomes. A simple example in autonomous driving is to ignore passenger experience while including an attribute that increases utility for making progress towards a destination; the learned behavior may exhibit high acceleration and jerk, which tend to harm passenger experience but do not directly affect this example utility function. Section 3 contains a list of potential attributes for autonomous driving that may help in this evaluation. Identifying missing attributes and adding them is one solution. When inclusion of all missing attributes is intractable, Amodei et al. propose penalizing a policy's impact on the environment. However, this approach, called impact regularization, shares potential issues with reward shaping.
Application to AD Since the set of required reward function attributes for AD is undetermined, we assume for the sake of analysis that the abstract attributes emboldened in Section 3 are the complete and necessary set for AD, and we then assess whether our surveyed reward functions are missing one or more of these attributes. This exercise is highly subjective, given that these desired attributes are not mathematically specified. For instance, Cai et al. [8] penalize acceleration and time until reaching the goal, which one might argue together approximately cover fuel consumption. And one might also argue that acceleration addresses passenger experience, though it does not include seemingly important aspects of passenger experience such as jerk and illusions of danger (e.g., braking safely but later than a human would). By our own judgment of the 10 reward functions in Appendix A, all are missing at least one attribute. The attributes that were most commonly included in some form were time spent driving, collisions, and progress to the destination, the combination of which is exemplified most purely in Isele et al.'s simple reward function [23].
Sanity checks for which failure is a warning
In addition to the sanity checks described above, other methods can be used to raise red flags, indicating potential issues in the reward function that warrant further investigation. Because the descriptions of these sanity checks are relatively short, the discussions of their applications to AD are less explicitly separated than for previously described sanity checks.
Redundant attributes Similarly, a utility function may have redundant attributes that encourage or discourage the same outcomes. Such overlap can overly encourage only part of what constitutes desirable outcomes. The overlap can also complicate a reward designer's understanding of the attributes' combined impact on the utility function. For instance, consider an autonomous driving utility function that includes two attributes, one that penalizes collisions and one that penalizes repair costs to the ego vehicle. Both attributes penalize damage to the ego vehicle from a collision. Yet each attribute also includes a measurement of outcomes that the other does not: a collision penalty can discourage harm to people and external objects, and a repair-costs penalty discourages driving styles that increase the rate of wear. When redundant attributes exist in the utility function, solutions include separating redundant aspects of multiple attributes into a new attribute or removing redundant aspects from all but one attribute. Executing these solutions is not always straightforward, however. In our example above, perhaps the collision penalty could be separated into components that measure harm to humans and animals, harm to external objects, and increased repair costs for the ego vehicle's maintainer. Then the repair-costs component of the collision penalty could be removed, since it is already contained within the penalty for overall repair costs.
Trial-and-error reward design If a publication presents its reward function without describing the reward design process, we suspect that it was likely designed through a process of trial and error.
This trial-and-error reward design process involves designing a reward function, testing an RL agent with it, using observations of the agent's learning to tune the reward function, and then repeating this testing and tuning process until satisfied. This process is also described by Sutton and Barto [45, p. 469]. Since the reward function itself is being revised, typically one or more other performance metrics are employed to evaluate learned policies. Those performance metrics could be based on subjective judgment or be explicitly defined.
One issue with this trial-and-error reward design is that the specification of the reinforcement learning problem should not in principle be adjusted to benefit a candidate solution to the problem. More practically, another issue is that this manual reward-optimization process might overfit. In other words, trial-and-error reward design might improve the reward function's efficacy-whether measured by the reward designer's subjective evaluation, another performance metric, or otherwise-in the specific context in which it is being tested, but then the resultant reward function is used in other untested contexts, where its effects are unknown. Factors affecting trial-and-error reward design include both the RL algorithm and the duration of training before assessing the learned policy. In particular, after the designer chooses a final reward function, we suspect they will typically allow the agent to train for a longer duration than it did with the multiple reward functions evaluated during trial-and-error design. Further, any comparison of multiple RL algorithms will tend to unfairly favor the algorithm used during trial-and-error design, since the reward function was specifically tuned to improve the performance of that RL algorithm.
Two types of trial-and-error reward design do appear appropriate. The first type is when intentionally designing a reward shaping function, since reward shaping is part of an RL solution and is therefore not changing the RL problem. The second type is when observations of learned behavior change the designer's understanding of what the task's trajectory-level performance metric should be, and the reward function is changed only to align with this new understanding.
Trial-and-error reward design for AD is widespread: of the 8 publications whose authors shared their reward design process with us in correspondence, all 8 reported following some version of defining a linear reward function and then manually tuning the weights or revising the attributes via trial and error until the RL algorithm learns a satisfying policy [13,23,8,21,32,9,53,48]. Based on informal conversations with numerous researchers, we suspect trial-and-error reward design is widespread across many other domains too. Yet we are unaware of any research that has examined the consequences of this ad hoc reward-optimization process.
Incomplete problem specification in research presentations Many publications involving RL do not exactly specify aspects of their RL problem specification(s). Only 1 of the 10 focus papers thoroughly described the reward function, discount factor, termination conditions and time step duration used [29]. We learned the other 9 papers' specifications through correspondence with their authors. We conjecture that a broader analysis of published reward functions would find that such omission of problem specification details is positively correlated with reward design issues. In the absence of such confirmatory analysis, we nonetheless encourage authors to write the full problem specification, both for the reader's benefit and because the practice of writing the problem details may provide insights.
Exploring the design of an effective reward function
In contrast to the previous section-which focuses on how not to design a reward function-this section instead considers how to design one. This section contains exploration that we intend to be preliminary to a full recommendation of a specific reward function or of a process for creating one. Again, AD serves as our running example throughout this section.
Performance metrics beyond RL
One potential source of reward-design inspiration is the performance metrics that have been created by communities beyond RL researchers.
For AD specifically, prominent among these metrics are those used by regulatory agencies and companies developing AD technology. Many such metrics express the distance per undesirable event, which incorporates two critical utility-function attributes: making progress and avoiding failure states like collisions. Specifically, the California Department of Motor Vehicles (DMV) requires reporting of miles per disengagement, where a disengagement is defined as the deactivation of the vehicle's autonomous mode and/or a safety driver taking control from the autonomous system. Criticisms of this metric include that it ignores important context, such as the complexity of driving scenarios, the software release(s) being tested, and the severity of the outcomes averted by the safety drivers' interventions. The California DMV also requires a report to be filed for each collision, through which a miles per collision measure is sometimes calculated [15]. This metric is vulnerable to some of the same criticisms. Also, note that disengagements often prevent collisions, so safety drivers' disengagement preferences can decrease miles per disengagement while increasing miles per collision, or vice versa; therefore, the two complementary metrics can be combined for a somewhat clearer understanding of safety. Another metric is miles per fatality, which addresses the ambiguity of a collision's severity. In contrast to the 8 attributes we listed in Section 3 as important for an AD utility function, each of the above metrics only covers 2 attributes-distance traveled and a count of an undesirable event.
Performance metrics for AD have also been designed by other robotics and artificial intelligence communities. In particular, per-time-step cost functions developed by the planning and control communities could be converted to reward functions by multiplying their outputs by −1. Additionally, insight might be gained by examining the reward functions learned by techniques reviewed in Section 5.4. However, a review of such cost functions and learned reward functions is beyond the scope of this article.
An exercise in designing utility function attributes
To get a sense of the challenges involved in expressing reward function attributes, let us consider in detail three attributes that we listed previously in Section 3: progress to the destination, obeying the law, and passenger experience. We assume that each attribute is a component in a linear reward function, following the common practice. For simplicity, we focus on the scenario of driving one or more passengers, without consideration of other cargo.
We see one obvious candidate for progress to the destination. 7 This approach initially appears reasonable but nonetheless has at least one significant issue: each equal-distance increment of progress has the same impact on the attribute. To illustrate, consider two policies. One policy always stops exactly halfway along the route. The other policy reaches the destination on half of its trips and does not start the trip otherwise. With respect to the progress-to-the-destination attribute proposed above, both polices have the same expected performance. Yet the performance of these two policies do not seem equivalent. For our own commutes, we authors certainly would prefer a ride-sharing service that cancels half the time than a service that always drops us off halfway to our destination.
This dilemma leaves open the question of how to calculate progress to the destination as an attribute. Perhaps it should be based upon the utility derived by the passenger(s) for being transported to some location. We suspect this utility would generally be lowest when the drop-off point is far both from the pickup location and from the destination. A highly accurate assessment of this utility would seemingly require information about what the passenger(s) would do at any drop-off location, including their destination. For instance, if a drug store is the destination, a passenger's utility at various drop-off locations will differ greatly based on whether they plan to purchase urgently needed medication or to buy a snack. Such information is unlikely to be available to autonomous vehicles, but nonetheless a coarse estimate of a passenger's utility for a specific drop-off location could be made with whatever information that is available. Also, perhaps passengers could opt in to share that reaching their destination is highly consequential for them, allowing the driving policy to reflect this important information.
Obeying the law is perhaps even trickier to define precisely as an attribute. This attribute could be expressed as the penalties incurred for breaking laws. Unfortunately, such penalties come in different units, with no obvious conversion to the other units-such as fines, time lost interacting with law enforcement and court systems, and maybe even time spent incarcerated-making difficult the combination of these penalties into a single numeric attribute. An additional issue is that modeling the penalties ignores the cost to others, including to law enforcement systems. If such external costs are included, the attribute designer might also want to be careful that some of those external costs are not redundantly expressed in another attribute, such as a collisions attribute. A further challenge is obtaining a region-by-region encoding of driving laws and their penalties.
Passengers are critical stakeholders in a driving trajectory, and passenger experience appears important because their experiences are not always captured by other metrics like collisions. For example, many people have experienced fear as passengers when their driver brakes later than they prefer, creating a pre-braking moment of uncertainty regarding whether the driver is aware of the need to slow the vehicle, which they are. Though the situation was actually safe in hindsight, the late application of brakes created an unpleasant experience for the passenger. To define passenger experience as an attribute, one candidate solution is to calculate a passenger-experience score through surveys of passenger satisfaction, perhaps by averaging all passengers' numeric ratings at the end of the ride. However, surveys rely on biased self report and might be too disruptive to deploy for every trajectory. In experimental design, when surveys are asking for respondents' predictions of their own behavior, it is advisable to consider whether instead the experiment could rely on observations of behavior; this guideline prompts the question of whether future passenger behavior-such as choosing to use the same AD service again-might be more useful as part of the attribute than direct surveys. Lastly, passenger experience is an underdefined concept, despite our confidence that it is important to include in the overall AD utility function, leaving open the question of what exactly to measure.
Combining these attributes is also difficult, since the units of each attribute might not be straightforwardly convertible to those of other attributes. The reward function is commonly a linear combination of attributes; however, this linearity assumption could be incorrect, for example if the utility function needs to be the result of a conjunction over attributes, such as a binary utility function for which success is defined as reaching the destination without collision. Also, weight assignment for such linearly expressed reward functions is often done by trial and error (see Section 4.6), possibly in part because the researchers lack a principled way to weigh attributes with different units.
A financial utility function for AD
One potential solution to the challenge of how to combine attributes is to express all attributes in the same unit, so they can be added without weights. Specifically, a financial utility function might output the change in the expectation of net profit across all stakeholders caused by τ . Utilities expressed in currency units are common in RL when profit or cost reduction are the explicit goals of the task, such as stock trading [34] and tax collection [31], but we are unaware of its usage as an optimization objective for AD.
To create such a financial utility function for AD, non-financial outcomes would need to be mapped to financial values, perhaps via an assessment of people's willingness to pay for those outcomes. We have been surprised to find that some non-financial outcomes of driving have a more straightforward financial expression than we initially expected, providing optimism for this strategy of reward design. For example, much effort has gone towards establishing a value of statistical life, which allows calculation of a monetary value for a reduction in a small risk of fatalities. The value of statistical life is used by numerous governmental agencies to make decisions that involve both financial costs and risks of fatality. The US Department of Transportation's value of statistical life was $11.6 million US Dollars in 2020 [51,50].
Methods for learning a reward function
Instead of manually designing a reward function, one can instead learn one from various types of data. Methods of learning reward functions include inverse reinforcement learning from demonstrations [36,60], learning reward functions from preferences over trajectory segments [57,10,7], and inverse reward design from trial-and-error reward design in multiple instances of a task domain [19]. Traditionally, these approaches assume that reward is a linear function over pre-specified attributes and that only the weights are being learned, so the challenges of choosing attributes remain. Approaches that instead model reward via expressive representations like deep neural networks [58,10,16] could avoid this challenge. Another issue is that there is no apparent way to evaluate the result of reward learning without already knowing the utility function G, which is not known for many tasks like autonomous driving; blindly trusting the results of learning is particularly unacceptable in safety-critical applications like AD. For these methods, nearly all of the sanity checks we present in Section 4 could provide partial evaluation of such learned reward functions, since the checks are agnostic to how the reward function was created. The exception is the seventh sanity check, trial-and-error reward design, although the potential issues of overfitting the reward function through such ad hoc design might also be present with learned reward functions that result from optimizing a trajectory-level performance measure (e.g., in [43]).
Although manual design of reward functions and learning reward functions may appear to be mutually exclusive approaches, they can be used complementarily. For instance, the aforementioned approach of inverse reward design [19] involves learning a single reward function that explains aspects of multiple manually designed reward functions. More generally, if further research finds that learning reward functions is more performant than manual reward design, we nonetheless suspect that manual reward design could provide information that helpfully informs the learning process, such as providing a prior over reward functions.
Multi-objective reinforcement learning
Multi-objective approaches [41] are an alternative to defining a single utility or reward function for optimization. In particular, the combination of the attributes of the utility function could be left undefined until after learning. Such an approach may fit well with autonomous driving, for which some attributes change over time. For example, the frequent price changes of petroleum gasoline or electricity could be accounted for by proportionally re-weighting the fuel costs attribute. Many multi-objective reinforcement learning algorithms evaluate sets of policies such that the best policy for a specific utility function parametrization can later be estimated without further learning. For linear utility functions-including those that arise from linear reward functions-successor features and generalized policy improvement [11,6] are promising techniques to make decisions under a changing utility function without requiring further task experience. Much of this article's inquiry and the sanity checks in Section 4 apply to the choice of utility-function parametrization that often must be made after multi-objective RL to enable task execution.
Conclusion
In the US alone, 1.9 trillion miles were driven in 2019 [49]. Once autonomous vehicles are prevalent, they will generate massive amounts of experiential data. Techniques like reinforcement learning can leverage this data to further optimize autonomous driving, whereas many competing methods cannot do so (e.g., behavioral cloning) or are labor-intensive (e.g., manual design of decision-making). Despite the suitability of RL to AD, it might not play a large role in its development without well-designed objectives to optimize towards. By using AD as a motivating example, this article sheds light on the problematic state of reward design, and it provides arguments and a set of sanity checks that could jump start improvements in reward design. We hope this article provokes conversation about reward specification-for autonomous driving and more broadly-and adds momentum towards a much-needed sustained investigation of the topic.
From this article, we see at least six impactful directions for further work. First, specifically for AD, one could craft a reward function or other utility function for AD that passes our sanity checks, includes attributes that incorporate all relevant outcomes, addresses the issues discussed in Section 5.2, and passes any other tests deemed critical for an AD utility function. The second direction supports the practicality of learning from a utility function that passes our first three sanity checks. Specifically, in the work we reviewed, the challenges of exploration and reward sparsity were partially addressed by reward shaping and low penalties for collisions. One could empirically demonstrate that, while learning from a reward function that passes the corresponding sanity checks, these challenges can instead be addressed by other methods. Third, the broad application of these sanity checks across numerous tasks would likely lead to further insights and refinement of the sanity checks. Fourth, one could develop more comprehensive methods for evaluating utility functions with respect to human stakeholders' interests. Fifth, trial-and-error reward design is common in AD and, if we extrapolate from informal conversations with other researchers, is currently the dominant form of reward design across RL tasks in general. Investigation of the consequences of this ad hoc technique would shed light on an unsanctioned yet highly impactful practice. Finally, the community would benefit from research that constructs best practices for the manual design of reward functions for arbitrary tasks.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (W911NF-19-2-0333), DARPA, Lockheed Martin, GM, Bosch, and UT Austin's Good Systems grand challenge. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.
Appendix A. Reward functions in the 10 focus papers
In this Appendix section, we describe reward functions and other problem specification details from the 10 papers that we evaluate in Section 4. We also closely examined 9 other publications, but because the patterns in the 10 focus papers below were so consistent, we did not pursue clarifying these 9 additional publications' problem specifications sufficiently for us to be able to characterize them with confidence at the same level of detail as the focus papers described below.
In this section, papers are listed alphabetically by first author's last name. A † marks information obtained in part or fully through correspondence with an author of the paper. Additionally, we include time limit information but do not typically report whether the RL agent is updated with a terminal transition when time expires and an episode is stopped, since we rarely have such information. However, we suspect that most of these time limits are meant to make training feasible and are not actually part of the problem specification, and we further suspect that agents are correctly not updated with a terminal transition upon time exhaustion.
A.1. LeTS-drive: driving in a crowd by learning from tree search [8] Reward function The reward function is the unweighted sum of 3 attributes, with the authors' stated purpose of each in brackets:
Time step duration Time steps are 100 ms.
Discount factor The discount factor γ = 1. † Episodic/continuing, time limit, and termination criteria The task is episodic. Regarding the time limit, episodes are computationally stopped after 120 seconds (in simulation time) / 1200 time steps, which is used for calculating the success rate. However, if a trajectory is stopped at 120 seconds, none of the trajectory is used to update the value function, making it somewhat optimistic. † Termination criteria are collisions and the agent reaching the goal. † A.2. Model-free deep reinforcement learning for urban autonomous driving [9] Reward function The reward function is the unweighted sum of 5 attributes: Reward is calculated for every 100 ms CARLA time step, but it is received by the agent only at its decision points, every 400 ms. That received reward is the sum of the four 100 ms rewards.
Time step duration Time steps are effectively 400 ms because "frame skip" is used to keep the chosen action unchanged for 4 frames, each of which is 100 ms (as in other research within the CARLA simulator).
Discount factor
The discount factor γ = 0.99 and was applied at every decision point (i.e., every 400 ms). † Episodic/continuing, time limit, and termination criteria The task is episodic. The time limit is 500 time steps (i.e., 50 s). † Termination criteria are getting to the goal state, collisions, leaving the lane, and running out of time. † [13] Reward function The reward function is the weighted sum of 5 attributes:
A.3. CARLA: an open urban driving simulator
These attributes are • d, the change in distance traveled in meters along the shortest path from start to goal, regularly calculated using the ego's current position; • v, the change in speed in km/h; • c, the change in collision damage (expressed in range [0, 1]); • s, the change in the proportion of the ego vehicle that currently overlaps with the sidewalk; and • o, the change in the proportion of the ego vehicle that currently overlaps with the other lane.
Time step duration Time step information is not described in the paper, but 0.1 s is the time step duration reported by all other papers that use the CARLA simulator and report their time step duration.
Discount factor The first author suspects γ = 0.99 (which is typical for A3C). † Episodic/continuing, time limit, and termination criteria The task is episodic. The time limit for an episode is the amount of time it would take a car to follow the shortest path from the start state to a goal state, driving at 10 km/h. Termination occurs upon collision or reaching the goal. [21] Reward function The reward function is the unweighted sum of 3 attributes: Discount factor The discount factor γ = 0.99. † Episodic/continuing, time limit, and termination criteria The task is continuing. † There is no time limit nor termination criterion. † A.5. Navigating occluded intersections with autonomous vehicles using deep reinforcement learning [23] Reward function The reward function is the unweighted sum of 3 attributes:
A.4. Dynamic input for deep reinforcement learning in autonomous driving
• −0.01 given every step; • −10 if a collision occurred (and 0 otherwise); and • +1 when the agent successfully reaches the destination beyond the intersection (and 0 otherwise).
Time step duration
Time steps are 200 ms, though some actions last 2, 4, or 8 time steps (i.e., the agent may frame skip).
Discount factor
The discount factor γ = 0.99. † Episodic/continuing, time limit, and termination criteria The task is episodic. In unoccluded scenarios, episodes are limited to 20 s. In occluded scenarios, episodes are limited to 60 s. Termination occurs upon success, running out of time, or collision.
A.6. End-to-end race driving with deep reinforcement learning [24] Reward function Four different reward functions are evaluated, three of which are new and intentionally add reward shaping: Above, v is the vehicle velocity component in the direction of the lane's center line, d is the distance from the middle of the road, α is the difference between the vehicle's heading and the lane's heading, and w is the road width.
Time step duration Time step duration is 33 ms, with 1 step per 30 FPS frame. † The game will pause to await the RL agent's action.
Discount factor
The discount factor γ = 0.99. † Episodic/continuing, time limit, and termination criteria The task is episodic. No time limit is enforced. The termination criteria are the vehicle stops progressing or the vehicle goes off-road or the wrong direction.
A.7. CIRL: controllable imitative reinforcement learning for vision-based self-driving [29] Reward function The reward function is the unweighted sum of 5 attributes: • penalty for steering angles in ranges assumed incorrect for current command (e.g., going left during a turn-right command); • speed (km/h), with penalties for going too fast on a turn and limits to speed-based reward when not turning (to keep under a speed limit); • -100 upon collision with vehicles or pedestrians and -50 upon collision with anything else (e.g., trees and poles) (and 0 otherwise); • -100 for overlapping with the sidewalk (and 0 otherwise); and • -100 for overlapping with the opposite-direction lane (and 0 otherwise).
Time step duration Time steps last 100 ms, the same as has been reported in other research conducted within the CARLA simulator.
Discount factor
The discount factor γ = 0.9. Episodic/continuing, time limit, and termination criteria The task is episodic. The time allotted for an episode is the amount of time to follow the "optimal path" to the goal at 10 km/h. Termination occurs upon successfully reaching the destination, having a collision, or exhausting the allotted time.
A.8. Deep distributional reinforcement learning based high-level driving policy determination [32] Reward function The reward function is the unweighted sum of 4 attributes: • v− 40 40 , where v is speed in km per hour within the allowed range of [40,80]
Time step duration
The time step duration is the time between frames in the Unity-based simulator. The correspondence of such a frame to seconds in the simulated world was unknown to the first author. † Discount factor The discount factor γ = 0.99. Episodic/continuing, time limit, and termination criteria The task is episodic. There is no time limit. † Termination occurs upon collision with another vehicle or when the ego vehicle travels the full track length (2500 Unity spatial units † ), effectively reaching a goal.
A.9. Learning hierarchical behavior and motion planning for autonomous driving [53] Reward function The reward function is defined separately for transitions to terminal and non-terminal states. For transitions to terminal states, reward is one of the following: The reward for a single non-terminal high-level behavioral step is the negative sum of the costs of the shorter steps within the best trajectory found by the motion planner, which executes the high-level action. Expressed as the additive inverse of cost, this reward for a high-level behavioral step is the unweighted sum of these 3 attributes: , which rewards speeds close to the desired speed, v ref ; • −1 1+ t |v(t)| , which rewards based on distance traveled; and Discount factor Discount factor γ = 0.99. Episodic/continuing, time limit, and termination criteria The task is episodic. The time limit is the time required to travel the length of a randomly generated route in a CARLA town at 10 km/h. Episodes are terminated when the goal is reached or any of the following occur: a collision, driving out of the lane, or a red light violation.
A.10. End-to-end model-free reinforcement learning for urban driving using implicit affordances [48] Reward function The reward function outputs r = r speed + 0.5 × r dist + 0.5 × r heading and is in [−1, 1].
• r speed : 1 − |s desired − s ego |/40, † an attribute in [0, 1] (40 km/h is the maximum s desired † ) that is inversely proportional to the absolute difference in the actual speed and the desired speed; • r dist : −d path /d max , an attribute in [−1, 0], where d max = 2.0 and d path is the distance in meters from the closest point on the optimal path's spline; • r heading : clip(−1, 0, b × |θ ego − θ path |) † , an attribute in [−1, 0], where θ ego is the ego vehicle's heading and θ path is the heading of the optimal path spline at its closest point; and • −1 upon termination (0 otherwise).
To determine reward, an optimal path is created via the waypoint API. This path is optimal with simplifying assumptions. In training, this waypoint-generated optimal path is generated by randomly choosing turn directions at intersections.
The unit for speeds is km/h, and the unit for angles is degrees. The desired speed, s desired , is hard coded based on the presence of traffic lights and obstacles, using "priveleged" information available only during training. d max is half the width of a lane (which apparently is always 2 m in CARLA). In the formula for r heading , b = (−1/10) when the optimal path's accompanying high-level command is to follow the lane or go straight through an intersection, and b = (−1/25) when the command is to turn left or right through an intersection. † We note that this reward function appears somewhat supervisory, teaching the target speed in various contexts and to stay close to the precomputed path. Further, the context required to calculate reward is not available to the agent during testing.
Time step duration Time steps last 100 ms, the same as has been reported in other research conducted within the CARLA simulator.
Discount factor Discount factor γ = 0.99. † Episodic/continuing, time limit, and termination criteria The task is episodic. There is no time limit during training. During training, there were no successful terminations (i.e., upon reaching a destination); instead, driving continued along a procedurally generated route until an undesirable termination condition was met. † Termination conditions are when the agent is further from the optimal path than d max , collisions, running a red light, and having 0 speed when neither behind an obstacle nor waiting at a red traffic light.
Appendix B. Reward shaping examples per paper
Of the 19 papers we reviewed, we are highly confident that 13 include reward shaping. Below is at least one example per paper of behavior discouraged or encouraged by reward shaping. The following examples are discouraged behavior, penalized via negative reward: • deviating from the center of the lane [24,48,37,46], • changing lanes [21], • overlapping with the opposite-direction lane [13,29], • overlapping a lane boundary [20], • delaying entering the intersection when it is the ego vehicle's right of way at a stop sign [28], • deviating from steering straight [9], • side-ways drifting (on a race track) [30], • getting close to other vehicles [20], and • having the turn signal on [46]. • passing other vehicles [32] and • increasing distances from other vehicles [53].
Additionally, 2 papers had reward attributes that were somewhat defensible as not constituting reward shaping. The reward function in [55] also includes a penalty correlated with the lateral distance from center of the lane, but their reinforcement learning algorithm is explicitly a lower-level module that receives a command for which lane to be in, and one could argue the subtask of this module is to stay inside that lane. However, being perfectly centered in the lane is not part of that high-level command, so we think the argument for considering it to be reward shaping is stronger than the argument for the alternative. A reward attribute in [4] encourages being in the rightward lane if no car is in the way, which fits laws in some US states that require drivers to keep right unless passing. Therefore, whether their reward function includes reward shaping depends on the laws for the location of the ego vehicle.
Appendix C. Calculation of trajectory returns
This appendix section describes how we estimate the return for various trajectories under each reward function. These calculations are used for two related sanity checks for reward functions: comparing preference orderings (Section 4.2) and comparing indifference points (Section 4.3).
Recall that we estimate returns for 3 different types of trajectories: • τ crash , a drive that is successful until crashing halfway to its destination; • τ idle , the safe trajectory of a vehicle choosing to stay motionless where it was last parked; and • τ succ , a trajectory that successfully reaches the destination.
We additionally remind readers that an indifference point is calculated by solving the following equation for p when In this paper, we calculate it specifically with G(τ idle ) = pG(τ succ ) To illustrate, assume that for successful-until-collision τ crash , G(τ crash ) = −10; for motionless τ idle , G(τ idle ) = −5; and for successful τ succ , G(τ succ ) = 10. Recall that the indifference point p is where the utility function has no preference over τ idle and a lottery between τ crash and τ succ according to pG(τ succ ) + (1 − p)G(τ crash ). For this example, −5 = (p × 10) + ((1 − p) × −10), and solving the equation results in p = 0.25. Therefore, the utility function would prefer driving (more than not driving) with any ratio higher than 1 success to 3 crashes. Let us also assume that a path length is 1 km. Therefore, at the indifference point, for each collision the car would drive 0.33 km on successful drives and half of 1 km on a drive with a collision, which is Calculations of safety indifference points are given below for the two papers for which τ crash ≺ τ idle ≺ τ succ [8,23].
In our descriptions below, we try to name every significant assumption we make. Readers might find it useful to choose different assumptions than ours to test how sensitive our analysis is to these assumptions; we expect the reader will find no changes in the qualitative results that arise from this quantitative analysis.
C.1. General methodology and assumptions
To estimate return for some trajectory τ i , we do not fully ground it as a sequence of state-action pairs. As we show in the remainder of this appendix section, the exact state-action sequence is not needed for estimating certain trajectories' returns under these reward functions.
For calculating reward per time step, we adhere to the following methodology.
• To determine the return/utility for a successful portion of a trajectory, we assume no unnecessary penalties are incurred (e.g., driving on the sidewalk).
• For positive attributes of reward, we choose the value that gives the maximum outcome. If the maximum is unclear, we choose outcomes for attributes that are as good or better than their best-reported experimental results. Lastly, if experimental results do not include a measure of the reward attribute, we attempt to calculate a value that is better than what we expect a typical human driver to do.
• Path lengths are the given length of the paper's driving tasks or our estimation of it. If the given information is insufficient to estimate the path length, we assume it to be 1 km.
For the papers paper below, we write out the units for each term in the equations for the return of a trajectory. We encourage the reader to refer to the paper's corresponding subsection in Appendix A to understand our calculations of return. Additionally, to aid the reader, we use specific colors for terms expressing the time limit, path length, time step duration, and speed. For reward functions that are sums of attributes, in our return calculations we maintain the order of the attributes as they were described in Appendix A, and we include 0 terms for attributes that do not affect return for the corresponding trajectory.
C.2. Assumptions and calculations for each paper
LeTS-drive: driving in a crowd by learning from tree search [8] Driving maps are 40 m × 40 m and are each a single intersection or curve, and from this map size we assume the path length is 40 m. This assumption appears reasonable because the car is spawned in a random location.
For the successful path τ succ and the successful portion of path τ crash , we use mean task time and the number of deceleration events reported for their best algorithm. The motionless τ idle involves time running out, which is not actually a termination event in their method but is treated so here. The driving speed when collision occurs is assumed to be the mean speed for LeTS-Drive calculated from their Table 1 Calculation of the indifference point: Model-free deep reinforcement learning for urban autonomous driving [9] We assume a constant speed of 5 m/s (18 km/h), which is the most reward-giving speed for the speed-based reward attribute. This paper focuses on a specific roundabout navigation task, which presumably would be a shorter route than those used in the more common CARLA benchmarks first established by Dosovitskiy et al. [13]. Accordingly, differing from our assumptions for most other CARLA evaluations, we assume a 0.125 km path length, the distance which can be achieved at the above speed in exactly half of the permitted 50 s time limit. We further assume that the steering angle is always 0 (avoiding a penalty) and that the agent never leaves its lane except upon a collision. Also, recall that reward is accrued at 100 ms time steps (whereas discounting is applied at 400 ms time steps). Path length: 0.125 km. A trajectory that is successful until collision: CARLA: an open urban driving simulator [13] For a successful drive, we assume a 1 km path, a change in speed from start to finish is 60 km/h, and that no overlap occurs with sidewalk or other lane. For a trajectory with a collision, we assume the collision damage is total (i.e., 1) and the ego vehicle completely overlaps with sidewalk or other lane at 60 km/h. Dynamic input for deep reinforcement learning in autonomous driving [21] Because a collision appears impossible in this task, this reward function was not involved in the analysis of preference orderings and indifference points.
Navigating occluded intersections with autonomous vehicles using deep reinforcement learning [23] We assume that the path is 6 lane widths, which is roughly what the "Left2" turn requires and that each lane is 3.35 m wide (based on 11 feet appearing common enough, e.g., in https://mutcd .fhwa .dot .gov /rpt /tcstoll /chapter443 .htm). For the successful drive, we assume 4 s was required. We focus on the unoccluded scenario with a 20 s time limit.
Path length: Calculation of km per collision at the indifference point: End-to-end race driving with deep reinforcement learning [24] Note that the domain is a car-racing video game, so safety constraints differ from autonomous driving. For successful driving, we assume 72.88 km/h, which is the average speed reported, and a 9.87 km track. We assume that the ego vehicle's heading is always aligned with the lane and the car is always in the center of the lane. CIRL: controllable imitative reinforcement learning for vision-based self-driving [29] For successful drives, we assume a 60 km / h speed that is within speed limit. We assume no penalties are incurred. When a collision occurs, we assume overlap the opposite-direction lane for 1 s (which has an equivalent impact as overlap with the sidewalk for 1 s), a 60 km / h speed that is within the speed limit, no steering-angle penalty, and collision with a vehicle specifically. As for other CARLA-based research, we assume a 1 km successful trajectory, which creates a 6 minute time limit. Path length: 1 km. A trajectory that is successful until collision: Deep distributional reinforcement learning based high-level driving policy determination [32] For a successful trajectory and the successful portion of the trajectory with a collision, we assume 17 overtakes per km (based on "Distance" in meters and "Num overtake" statistics shown in Fig. 7 in their paper and assuming those statistics are taken from a good trajectory) and an average of 1 lane change per overtake. We also assume that the car is always driving at 80 km / h, the speed that accrues the most reward. Since the minimum speed is 40 km / h and stopping in this paper's task-highway driving-would be unsafe, we instead assume the vehicle can decline to be deployed for 0 return. Since the first author did not know the duration of a time step in simulator time, we assume a common Unity default of 30 frames per second (and therefore 30 time steps per second) and that Unity processing time equals simulator time; consequently, 0.033 s time steps are assumed. We also assume a 1 km path, since the first author also did not have access to the path length for the task. Learning hierarchical behavior and motion planning for autonomous driving [53] We assume a 1 km path length and a speed of 60 km / h (or 16.67 m / s), as we do for most other CARLA evaluations. We also assume that the desired speed v ref is always 60 km / h, making the first per-time-step component result in 0 reward each step, and that high-level RL time steps last exactly 1 s, which is their reported mean duration. Lastly, we assume that no other vehicles ever are closer than 20 m from the ego vehicle. Because the path length is assumed to be 1 km, the time limit is 360 s (based on the time limit information in Appendix A). End-to-end model-free reinforcement learning for urban driving using implicit affordances [48] As we do for other CARLA evaluations, we assume a 1 km path length. We assume a speed of 30 km / h, based on the first author's report that 40 km/h was their maximum speed. We also assume termination occurs upon reaching some destination, which is only true during testing, since training involves driving until termination by failure. No additional reward is given at such successful termination. For τ crash and τ succ , we assume that the ego vehicle is always moving at the speed, location, and heading. We also assume that the 0-speed termination condition is not applied immediately but rather at 10 s and later; this assumption is made to avoid terminating at the starting time step, when the vehicle is spawned with a speed near 0 km / h. Path length: 1 km.
A trajectory that is successful until collision: | 16,654.4 | 2021-04-28T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
SESNet: sequence-structure feature-integrated deep learning method for data-efficient protein engineering
Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model to predict the fitness for protein mutants by leveraging both sequence and structure information, and exploiting attention mechanism. Our model integrates local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantic from the universal protein sequence space and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models for predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy by leveraging the data from unsupervised models to pre-train our model. After that, our model can achieve strikingly high accuracy in prediction of the fitness of protein mutants, especially for the higher order variants (> 4 mutation sites), when finetuned by using only a small number of experimental mutation data (< 50). The strategy proposed is of great practical value as the required experimental effort, i.e., producing a few tens of experimental mutation data on a given protein, is generally affordable by an ordinary biochemical group and can be applied on almost any protein. Supplementary Information The online version contains supplementary material available at 10.1186/s13321-023-00688-x.
Introduction
Proteins are workhorses of the life activities.Their various functions such as catalysis, binding, and transportation undertake most of the metabolic activities in cells.In addition, they are the key components of the cytoskeleton, supporting the stable and diverse form of organisms.Nature provides numerous proteins with great potential value for practical applications.However, the natural proteins often do not have the optimal function to meet the demand of bioengineering.Directed evolution is a widely used experimental method to optimize proteins' functionality, namely fitness, by employing a greedy local search to optimize protein fitness [1,2].During this process, gain-of-function mutants are achieved and optimized via mutating several Amino Acids (AA) in the protein, which were selected and accumulated through the iterative processes of mutation by testing hundreds to thousands of variants in each generation.Despite the great success directed evolution has achieved, the phase space of the protein fitness landscape can be screened by this method is rather limited.Furthermore, to acquire a mutant of excellent fitness, especially a high-order mutant with multiple AA being mutated, the directed evolution often needs to develop an effective highthroughput screening or conduct a large number of experimental tests, which is experimentally and economically challenging [3].
Since experimental screening for directed evolution is largely costing, particularly for high-order mutations, prediction of the fitness of protein variants in silico are highly desirable.Recently, deep learning methods have been applied for predicting the fitness landscape of the protein variants [2].By building models trained to learn the sequencefunction relationship, deep learning can predict the fitness of each mutant in the whole sequence space and give a list of the most favorable candidate mutants for experimental tests.Generally, these deep learning models can be classified into protein language models [4][5][6][7][8][9][10][11], learning the representations from the global unlabeled sequences [6,7,12] and multiple sequence alignment (MSA) based model, capturing the feature of evolutional information within the family of the protein targeted [13][14][15][16].And more recent works have proposed to combine these two strategies: learning on evolutionary information together with global natural sequences as the representation [17,18], and trained the model on the labelled experimental data of screened variants to predict the fitness of all possible sequences.Nevertheless, all these models are focused on protein sequence, i.e., using protein sequence as the input of the model.Apart from sequence information, protein structure can provide additional information on function.Due to the experimental challenge of determining the protein structure, the number of reported protein structures is orders of magnitude smaller than that of known protein sequences, which hinders the development of geometric deep learning model to leverage protein structural feature.Thanks to the dramatic breakthrough in deep learning-based technique for predicting protein structure [19,20], especially AlphaFold 2, it is now possible to efficiently predict protein structures from sequences at a large scale [21].Recently, some researches directly take the protein structure feature as input to train the geometric deep learning model, which has been proved to achieve better or similar performance in prediction of protein function compared to language models [22][23][24].However, the fused deep-learning method which can make the use of both sequence and structural information of the protein to map the sequence-function is yet much to be explored [25].
Recently, both supervised and unsupervised models have been developed for protein engineering, i.e., prediction of the fitness of protein mutants [24,26].Generally speaking, the supervised model can often achieve better performance as compared to the unsupervised model [26], but the former requires a great amount (at least hundreds to thousands) of experimental mutation data of the protein studied for training, which is experimentally challenging [18].In contrast, the unsupervised model does not need any of such experimental data, but its performance is relatively worse, especially for the high-order mutant, which is often the final product of a direct-evolution project.It is thus highly desirable to develop a deep-learning algorithm, which can efficiently and accurately predict the fitness of protein variants, especially the high-order mutant, without the need of a large size of experimental mutation data of the protein concerned.In the present work, we built a supervised deep learning model (SESNet), which can effectively fuse the protein sequence and structure information together to predict the fitness of variant sequences (Fig 1A).We demonstrated that SESNet outperforms several state-of-the-art models on 26 metagenesis datasets.Moreover, to reduce the dependence of the model on the quantity of experimental mutation data, we proposed a data-augmentation strategy (Fig 1B), where the model was firstly pre-trained using a large quantity of the low-quality results derived from the unsupervised model and then finetuned by a small amount of the high-quality experimental results.We showed that the proposed model can achieve very high accuracy in predicting the fitness of highorder variants of a protein, even for those with more than four mutation sites, when the experimental dataset used for finetuning is as small as 40.Moreover, our model can predict the key AA sites, which are crucial for the protein fitness, and thus the protein engineer can focus on these key sites for mutagenesis.This can greatly reduce the experiment cost of trial and error.
Deep learning-based architecture of SESNet for predicting protein fitness.
To exploit the diverse information from protein sequence, coevolution and structure, we fuse three encoder modules into our model.As shown in Fig 1A : the first one (local encoder) accounts for residue interdependence in a specific protein learned from evolution-related sequences [15,16]; the second one (global encoder) captures the sequence feature in global protein sequence universe [6,12]; and the third one (structure module) captures local structural feature around each residue learned from 3D geometric structure of the protein [23,24].To integrate the information of different modules, we first concatenate representations of local and global encoders and get an integrated sequence representation.This integrated sequence representation is then sent to an attention layer and becomes the sequence attention weights, which will be further averaged with the structure attention weights derived from structure module, leading to the combined attention weights.Finally, the product of combined attention weights and the integrated sequence representation is then fed into a fully connected layer to generate the predicted fitness.The combined attention weights can also be used to predict the key AA sites, critical for the protein fitness, details of which is discussed in the section of Method.The local encoder accounts for the inter-residue dependence in a protein learned from MSA of homologous sequences using a Markov random field [27].The global encoder captures the sequence feature in global protein sequence universe using protein language model [6].The structure module accounts for the microscopically environmental feature of a residue learned from 3D geometric structure of the protein [23,28].Schematic of data-augmentation strategy.(B): We first build a mutant library containing all of the single-site mutants and numerous double-site mutants.Then, all of these mutated sequences are scored by the unsupervised model.After that, these mutants are used to pre-train the initial model (SESNet), which will be further finetuned on a small number of low-order experimental mutational data.
SESNet outperforms state-of-the-art methods for predicting fitness of variants on deep mutation scan (DMS) datasets
We compared our supervised model against the existing state-of-the-art supervised models, ECNet [17], ESM-1b [6]; and unsupervised models, ESM-1v [9], ESM-IF1 [23] and MSA transformer [15].As can be seen in Fig 2A, in 19 out of 20 datasets, the supervised models generally outperform the unsupervised ones as expected, and our model (SESNet) achieves the best performance among all the models.Moreover, we further explored the ability of our model to predict the fitness of higher-order variants by training it using the experimental results of the low-order variants on 6 datasets of DMS.As shown in Fig 2B&C , our model outperforms all the other models.Data in Fig 2 is presented in Supplementary Tables 1,2&3.These datasets cover various proteins and different types of functionalities, including catalytic rate, stability, and binding affinity to peptide, DNA, RNA and antibody, as well as fluorescence intensity (Table 4).While most of the datasets contain only single-site mutants, five of them involve both single-site and double-site mutants, and the dataset of GFP contains data up to 15site mutants.
All three components contribute positively to the performance of SESNet.
As described in the above architecture (Fig. 1A), our model integrates three different encoders or modules together.To investigate how much contribution each of the three parts makes, we performed ablation studies in 20 datasets of single-site mutants.Briefly, we removed each of the three components and compared the performance to that of the original model.As shown in Supplementary Table 5, the average spearman correlation of the original model is 0.672, much higher than that without local encoder (0.639), that without global encoder (0.247) and that without structure module (0.630).The ablation study reveals that all three components contribute to the improvement of model performance, and the contribution from the global encoder, which captures the sequence feature in global protein sequence universe, is the most significant.
The combined attention weights guide the finding of the key AA site.
The combined attention weights can be used to measure the importance of each AA site on protein fitness when mutated.To the first approximation, higher the attention score is, more important the AA site is.To test this approximation, we trained our model on the experimental data of 1084 single-site mutants in the dataset of GFP [29], a green fluorescent protein from Aequorea victoria.The ground truth of the key sites of GFP are defined here as the experimentally discovered top 20 sites, which exhibit the largest change of protein fitness when mutated, or the AAs forming and stabilizing the chromophore, which are known to significantly affect the fluorescent function of the protein [30], but lack the fitness results in the experimental dataset.Indeed, one can observe that, at least 4 out of 7 top attention-score AA sites predicted by our model are the key sites as two of them (AG65 and T201) are located at the chromophore, and the other two (P73 and R71) were among the top 20 residues discovered in experiment to render the highest change of fitness when mutated (Fig 3A To further verify this discovery, we also performed these tests on the dataset of RRM, the RNA recognition motif of the Saccharomyces cerevisiae poly(A)-binding protein [31].The key sites of RRM are defined as the experimentally discovered top 20 sites, which render the largest change of fitness of the protein when mutated, or the binding sites, which are within 5 Å of the RNA molecules as revealed in the structure of PDB 6R5K.The results in Fig. 3 demonstrate that the structural module which learns the microscopically structural information around each residue makes important contribution to identify the key AAs, which are crucial for the protein fitness.Although the ablation study (Supplementary Table 5) reveals that the addition of the structural module improves the average spearman correlation over 20 datasets only by 4 percent, Fig. 3 demonstrates an important role of the structural module, which can guide the protein engineer to identify the important AA sites in a protein for mutagenesis .
Data-augmentation strategy boosts the performance of the fitness prediction when finetuned by a small size of labelled experimental data.
Supervised model is normally performing better than the unsupervised models (see Fig. 2) [26].But the accuracy of the supervised model is highly affected by the amount of input experimental results used for training.However, it is experimentally challenging and costly to generate sufficient data (many hundreds or even thousands) for such purpose on every protein studied.To address this challenge, we propose a simple strategy of data augmentation by using the result generated by one unsupervised model to pre-train our model on a given protein, and then finetuning it using a limited number of experimental results on the same protein.We call it a pre-trained model.We note that data-augmentation strategy has been applied in various earlier work and has achieved good success in protein design [23,32,33].In particular, to improve the accuracy of inverse folding, ref [23] used 16153 experimentally determined 3-D structures of proteins and 12 million structures predicted by the AlphaFold 2 [19] to train the model ESM-IF1 [23] .In the present work, the data augmentation strategy is used for a different purpose that it can reduce the dependence of the supervised model on the size of the experimental data when predicting the fitness of protein mutants.We took GFP as an example to illustrate our data-augmentation strategy as GFP has a large number of experimental data for testing, particularly the experimental data for highorder mutants (up to 15-site mutant).We used the fitness results of low-order mutants predicted by the unsupervised model, ESM-IF1, to pre-train our model.The pretraining dataset contains the fitness of all single-site mutants and 30,000 double-site mutants randomly selected out of tens of million double-site variants.Then, we finetuned the pre-trained model by a certain number of experimental results of singlesite mutants.The resulting model was used to predict the fitness of high-order mutants.As can be seen in Fig. 4A-D, when comparing with the original model without pretraining (blue bars), the performance of the pre-trained model is significantly improved (red bars).Such improvement is particularly large when only a small number of experimental data (40) is fed for training, and it will be gradually reduced when feeding more experimental data, eventually disappearing when more than 1000 experimental data were used for training.Here, we would like to particularly highlight the case when the finetuning experimental dataset contains only 40 data points.As can be seen in Fig. 4A, the pretrained model can achieve high spearman correlation of 0.5-0.7 for multisitemutants, even for high-order mutants with 5-8 mutation sites.This is remarkably important for most protein engineers, as such experimental workload (40 data points) is generally affordable in an ordinary biochemical research group.However, without pre-training, the performance of the supervised model is rather low (~0.2).This comparison demonstrates the advantage of the data augmentation strategy proposed in the present work.
Moreover, we also compared the performance of the pretrained model with respect to the unsupervised model (green bars), which were used for generating the low-quality pretraining datasets.As can be seen, when only 40 experimental data were used for training, the pretrained model has similar performance as compared to the unsupervised model for low-order mutants (< 4 mutation sites), but clearly outperforms the latter for high-order mutants (>4 mutation sites).When feeding more experimental data, especially a couple of hundreds, the pretrained model will outperform the unsupervised model regardless of how many sites of the protein were mutated.
The unsupervised model used for analysis in Fig. 4 is ESM-1F1, which captures the local structural information of a residue.To demonstrate the general superiority of data-augmentation strategy proposed here, we also tested the results using other unsupervised model to generate the augmented datasets for GFP.As can be seen in Fig. S3, we used ProGen2 [8], an unsupervised model to learn the global sequence information, for data augmentation, and still derived the similar conclusion as in Fig. 4.That is, the pretrained model outperforms the original model without pretraining especially when a small experimental dataset is used for training, and it also beats the unsupervised model particularly for the high-order mutants.
To further validate the generality of the data augmentation strategy proposed here, we did the analysis on the dataset of other proteins: toxin-antitoxin complex (F7YBW8) [34]containing data up to 4 sites mutants, and Adeno-associated virus capsids (CAPSD_AAV2S) [35], a deep mutational dataset including data up to 23-site mutants.We used the unsupervised model ProGen2 [8] to generate the low-quality data of F7YBW8 for pretraining, since we found ProGen2 performs better than ESM-IF1 on this dataset.As shown in Fig 5A, the pre-trained model outperforms both the original model without pretraining and the unsupervised model in the fitness prediction of all multi-site mutants (2-4 sites) after finetuned by using only 37 experimental data points.In addition, in the dataset of CAPSD_AAV2S (Fig 5B ), the pre-trained model also achieves the best performance in all of the high-order mutants ranging from 2 to 23 sites, when finetuned by only 20 experimental data points.These results further support the practical use of our data augmentation strategy, as the required experimental effort is largely affordable on most proteins.
Learned models provide insight into protein fitness.
SESNet projects a protein sequence into a high dimensional latent space and represents each mutant as a vector by the last hidden layer.Thus, we can visualize the relationships between sequences in these latent spaces to reveal how the networks learn and comprehend protein fitness.Specifically, we trained SESNet on the experimental data of single-site mutants from the datasets of GFP and RRM, then we used the trained model and untrained model to encode each variant and extracted the output of the last hidden layer as a representation of the variant sequence.Fig S4 shows a twodimensional projection of the high dimensional latent space using t-SNE [36].We found that the representations of positive and negative variants, i.e., the experimental fitness values being larger or smaller than that of wildtype, generated by the trained SESNet are clearly clustered into distinct groups (Fig S4A Furthermore, to explore why the data-augmentation strategy works, we performed a case study on GFP dataset.Here, we compared the latent-space representation from the last hidden layer generated by our model with and without pre-training using the augmented data from the unsupervised model.As seen in Fig. S5A, after pretraining even without finetuning by the experimental data, SESNet can already roughly distinguish the negative and positive mutants.One thus can deduce that the pre-training can furnish a good parameter initialization for SESNet.After further finetuning the pretrained SESNet by only 40 experimental data points of single-site mutants, a rather clear boundary between negative and positive high-order mutants is further outlined (Fig S5B).In contrast, when we skipped the pretraining process, i.e., directly training the model on 40 experimental data points, the separation between the positive and negative high-order mutants is rather ambiguous (Fig S5C).This comparison demonstrates the superiority of our data-augmentation strategy in distinguishing mutants of distinct fitness values, when the number of available experimental data is limited.Here, our model and other supervised models were trained on the data of single-site mutants.We used 10% of double-site mutants as validation set and the remaining 90% as test set.C: Comparison of our model to other models on fitness prediction of quadruple-site mutants of GFP.Here, our model and other supervised model were trained using the single, double, triple-site mutants and all the three together.We used 10% of quadruple-site mutants as validation set and the remaining 90% as test set.The error bar in single-site mutant was got from the five-fold cross-validation.Since we cannot do fivefold cross-validation in the fitness prediction of high-order mutants trained on low-order mutants, we don't put error bar for those data.
Discussion
In this study, we present a supervised deep learning model, which leverages the information of both sequence and structure of protein to predict the fitness of variants.And this model is found to outperform the existing state-of-the-art ones for protein engineering.Moreover, we proposed a data augmentation strategy, which pretrains our model using the results predicted by other unsupervised model, and then finetunes the model with only a small number of experimental results.We demonstrated that such data augmentation will significantly improve the accuracy of the model when the experimental results are very limited (~40), and also for high-order mutants with >4 mutation sites.We noted that our work, especially the data-augmentation strategy proposed here, will be of great practical importance as the experimental effort it requires is generally affordable by an ordinary biochemical research group and can be applied on most protein.
Method Details of Model Architecture
Local encoder.Residue interdependencies are crucial to evaluate if a mutation is acceptable.Several models, including ESM-MSA-1b [37], DeepSequence [14], EVE [38] and the Potts model [27], such as EVmutation [16] and ECNet [39], utilize multiple sequence alignment (MSA) to dig the constraints of evolutionary process in the residues level.In the present work, we use Potts model to establish the local encoder.This method first searches for the homologous sequences and builds MSA of the given protein with HHsuite [40].After that, a statistical model is used to identify the evolutionary couplings by learning a generative model of the MSA of homologous sequences using a Markov random field.In the model, the probability of each sequence depends on an energy function, which is defined as the sum of single-site constraints and all pairwise coupling constraints : () = ∑ ( ) + ∑ ( , ) ≠ (1) Where and are position indices along the sequence.The i-th amino acid xi is encoded by a vector, in which elements are set to the single-site term ei(xi) and pairwise coupling terms eij(xi, xj) for j=1,…,n, n is the number of residues in the sequence.These coupling parameters ei and eij can be estimated using regularized maximum pseudolikelihood algorithm [41,42].As the result, each amino acid in the sequence is represented by a vector whose length is ( + 1) , and the whole input sequence is encoded as a matrix whose size is ( + 1) × .Since the length of the local evolutionary representation of each amino acid is close to the length of the sequence, the ( + 1)-long vector would be transformed into a new vector with fixed length (in our local encoder, =128) through a fully connected layer to avoid the overfitting issue.Sequence of protein would also pass a Bi-LSTM layer and be transformed into an × matrix for random initialization.By concatenating two matrices above, we obtain the output of local encoder ′ =< ′ , ′ , … ′ >, whose size is × 2 .
Global Encoder.Recently, the large scale pre-trained models have been successfully applied in diverse tasks for inferring protein structure or function based on sequence information.Such as prediction of secondary structure, contact prediction and prediction of mutational effects.Thus, we take a pre-trained protein language model as the global encoder which is responsible to extract biochemical properties and evolution information of the protein sequences.There are some effective language models such as UniRep [12], TAPE [43], ESM-1v [44], ESM-1b [37], ProteinBERT [11] etc. We test these language models on our validation datasets, and results show that ESM-1b performs better than others.Therefore, we chose to use ESM-1b as the global encoder.
The model is a bert-based [45]
Structure module.
Structure module utilizes the microenvironmental information to guide the fitness prediction.In this part, we use the ESM-IF1 model [23] to generate the scores of mutant sequences, which evaluate their ability to be folded to the wildtype structure of the given protein.Higher scores mean these mutations are more favorable than others.Specifically, all possible single mutants at each position of a sequence would obtain the corresponding scores.The prediction sequence distribution is an ( × 20) matrix.Then we calculated the cross-entropy at each position of the sequence between the matrix above and one-hot encoding matrix of mutant sequence.After passing the results through a SoftMax function, we obtained an ( × 1) output vector, which is the reconstruction perplexities ′ =< 1 ′ , 2 ′ , … ′ > align the evolutionary sequence.In the present work, we do not directly encode distance map or the 3D coordinate of mutated protein.Since before that encoding process, we need to fold every specific mutant from their sequences, which will lead to unaffordable computational cost and is unpractical for the task of fitness prediction.as the embedding vector of the entire sequence.
Intra-Attention
Output layer.The input of output layer is the context vector from the output of attention aggregator, and an evolutionary score from the unsupervised model [23].While the evolutionary score may not be trusted in many cases, we use a dynamic weight to take the score into account.The context vector was firstly transformed to a hidden vector , where = ( ℎ + ) , ℎ and are learnable parameters, and ReLU [47] is the activation function.Then, the hidden vector is used to calculate the weight ∈ (0,1) on : = ( [; ]).The scale of quantifies how much should the model trust the score from the zero-shot model.At last, we use a linear layer to compute a fitness score ∈ according to the hidden vector directly, where = ℎ + .The output of our model, i.e., the prediction fitness ∈ is computed as: = (1 − ) × + × .
(2) We utilize the mean square error (MSE) as the loss function to update model parameters during back-propagation: , where is the number of samples in a mini-batch, is the target fitness and is the output fitness.
Dataset and experimental settings
Benchmark dataset collection.We first collected 20 multiple deep mutational scanning datasets from Ref [14].Most of them only contain the fitness data of single-site mutants, while one of them (RRM) [31] also provides data of high-order mutants.The fitness data measured in these datasets include enzyme function, growth rate, peptide binding, viral replication and protein stability.We also collected the mutant data of the WW domain of human Yap1, GB1 domain of protein G in Streptococcus sp. group G and FOS-JUN heterodimer from Ref [48], and the prion-like domain of TDP-43 from Ref [49] to evaluate the ability of our model to predict the effect of double-sites mutant by learning from the data of single-site mutant.Besides, the ability to predict the fitness of higher order mutants (larger than 2) is tested in the dataset from Ref [29].This study analyzed the local fitness landscape of the green fluorescent protein from Aequorea victoria (avGFP) by measuring the native function (fluorescence) of tens of thousands of derivative genotypes of avGFP.The detailed information on these datasets are provided in Table 4 in the Supplement Information.
Prediction of single-site mutation effects.We compared our model to ECNet, ESM-1b, ESM-1v and MSA transformer model on the DMS datasets.For the supervised models (ECNet and ESM-1b), we performed five-fold cross-validation on these datasets, and 12.5% of each train set are randomly selected as valid set.Spearman correlation was used to evaluate the performances of different models.
Prediction of High-order mutation effects.We evaluated the performance for predicting the fitness of high-order mutants by the model trained on low-order mutants.
The training set for the prediction of double-site mutants only contains the experimental fitness of single-site mutants.The models used to predict the fitness of quadruple mutants of avGFP are trained on single, double, triple, and all the three types of mutants, respectively.Both in the prediction of effect of double mutants and quadruple mutants, we chose 10% of the high-order mutant data as valid set.The performances of models were evaluated by Spearman correlation.
Data-augmentation strategy.The data augmentation was conducted by pre-training our model on the results predicted by the unsupervised model.To be specific, we first built a mutant library, which contains all of the single-site mutants and 30,000 double-site mutants randomly selected from tens of millions of saturated double-site mutants.Then, we used ESM-IF1 (or ProGen2) to score all of these sequences.Those sequence-score data were used to pre-train our model.While we used 90% of the data as training test, 10% as validation set.After that, we finetuned the pre-trained model on single-site mutants from experiment with the high-order mutants as test set.
Training details.SESNet was trained with adam optimizer with weight decay (equals to L2 norm).Hyperparameters of the model were tuned with a local grid search on the validation set.Since conducting 5-fold cross-validation and grid search on 20 datasets is costly, we only searched on two representative datasets.We performed grid search on GFP dataset for multi-sites dataset and RRM dataset for single-site dataset to obtain the best hyperparameters configuration and apply the search results in other datasets.We tested the hidden size of [128,256,512], learning rate of [1e-3, 5e-4, 1e-4, 5e-5, 1e-5], and dropout of [0.1, 0.2, 0.4].Table 7 in SI shows the details of the hyperparameters configuration.All experiments are conducted on a GPU server with 10 RTX 3090 GPUs (24GB VRAM) and 2 Intel Gold 6226R CPUs with 2TB RAM.
Model contrast.
The source code of ECNet model for contrast is downloaded from the GitHub website (https://github.com/luoyunan/ECNet)provided by Ref [17].The ESM-1b model is also reproduced in our local computers with architecture that is described in their publication [6].The code of ESM-IF1, ESM-1v and MSA transformer (ESM-MSA-1b) are got from the GitHub website of Facebook research (https://github.com/facebookresearch/esm).For each assay, all experiments of three different models are performed in the same dataset.
Figure 1 .
Figure 1.Architecture of model and the schematic of data-augmentation strategy.Architecture of SESNet (A):The local encoder accounts for the inter-residue dependence in a protein learned from MSA of homologous sequences using a Markov random field[27].The global encoder captures the sequence feature in global protein sequence universe using protein language model[6].The structure module accounts for the microscopically environmental feature of a residue learned from 3D geometric structure of the protein[23,28].Schematic of data-augmentation strategy.(B): We first build a mutant library containing all of the single-site mutants and numerous double-site mutants.Then, all of these mutated sequences are scored by the unsupervised model.After that, these mutants are used to pre-train the initial model (SESNet), which will be further finetuned on a small number of low-order experimental mutational data.
Fig S1A).Interestingly, when we removed the structure module from the model, only one residue in the predicted top-7 attention-score AA is the key site (Fig 3B and Fig S1B).
Fig 3C and Fig S2A show that 4 out of 7 top attention-score AA sites predicted by our model are the key AAs.One of them (I12) is among the top 20 residues and three of them (N7, P10 and K39) are binding sites.Whereas, no key residue can be found in the predicted top-seven attention-score AAs, when we removed the structure module.(Fig 3D and Fig S2B).
Fig S4B).In contrast, the representations from untrained model cannot provide a distinguishable boundary between positive and negative variants (Fig S4C and Fig S4D).Therefore, SESNet can learn to distinguish functional fitness of mutants into a latent representation space with supervised training.
Figure 2 .
Figure 2. Spearman correlation of predicted fitness.A: Comparison of our model to other models on the predicted fitness of the single-site mutants on 20 datasets.We performed five-fold crossvalidation with 7:1:2 as the ratio of train versus validation versus test set.B: comparison of predicted fitness of double-site mutants of our model to other unsupervised models (ESM-1v, ESM-IF1 and MSA transformer), or supervised models (ECNet and ESM-1b).Here, our model and other supervised models were trained on the data of single-site mutants.We used 10% of double-site mutants as validation set and the remaining 90% as test set.C: Comparison of our model to other models on fitness prediction of quadruple-site mutants of GFP.Here, our model and other supervised model were trained using the single, double, triple-site mutants and all the three together.We used 10% of quadruple-site mutants as validation set and the remaining 90% as test set.The error bar in single-site mutant was got from the five-fold cross-validation.Since we cannot do fivefold cross-validation in the fitness prediction of high-order mutants trained on low-order mutants, we don't put error bar for those data.
Figure 3 .
Figure 3.The sites with the top 7 largest attention scores on the wildtype sequence.A&B:The key sites of GFP have been marked as red spheres.A: 4 key sites were recovered by our model.G65 and T201 are the active residues helping to form and stabilize the chromophore in GFP as described by Ref[30].P73 and R71 are among the experimentally-discovered top 20 sites, which render the highest change of fitness when mutated.B: Only one key site was identified by the model when removing the structure module and it is Y37, which is among the experimentally-discovered top 20 AA sites.C&D: The key sites of RRM have been marked as red spheres.C: 4 key sites were recovered by the original model.N7, P10 and K39 are the binding sites which are within 5Å of the RNA molecules.I12 is among the experimentally-discovered top 20 sites, which render the highest change of fitness when mutated.D: There is no key site identified by the model when removing the structure module.
Figure 4 .
Figure 4. Results of models trained on different number of experimental variants.A-D: The spearman correlation of fitness prediction on multiple sites (2-8 sites) mutants by finetuning using 40, 100, 400, 1084 single-site experimental mutation results from dataset of GFP.Where the red and blue bars represent the results of the pre-trained model and the original model without pretraining, respectively.And the green bars correspond to the results of the unsupervised model ESM-IF1 as a control.
Figure 5 .
Figure 5. Results of models trained on different datasets.A-B: The spearman correlation of fitness prediction on high-order mutants by finetuning on 37 experimental single-site mutation results from datasets of F7YBW8 and on 20 experimental single-site mutation results of CAPSD_AAV2S, respectively.Where the red and blue bars represent the results of the pre-trained model and the original model without pretraining.And the green bars correspond to the results of the unsupervised model, which is ProGen2 for F7YBW8 and ESM-IF1 for CAPSD_AAV2S, respectively.
context-aware language model for protein, trained on the protein sequence dataset of UniRef 50 (86 billion amino acids across 250 million protein sequences).Due to its ability to represent the biological properties and evolutionary diversity of proteins, we utilize this model as our global encoder to encode the evolutionary protein sequence.Formally, given a protein sequence =< , , … , > ∈ as input, where is the one-hot representation of amino acids in the evolutionary sequence, is the length of the sequence, and is the size of amino acids alphabet.The global encoder first encodes each amino acid and its context to =< , , … , > , where ∈ , (in ESM-1b, = 1420 ).Then is projected to ′ of a hidden space with a lower dimension (in our default model configuration, ℎ = 256), ′ = + , where ∈ × is a learnable affine transform parameter matrix and ∈ is the bias.The output of global encoder is ′ =< ′ , ′ , … ′ > ∈ × .We integrate the ESM-1b architecture into our model i.e.; we update the parameters of ESM-1b dynamically during the training process.
.
[46]outputs of local encoder and global encoder are embedding vectors, aligning all positions of input sequence.We utilize intra-attention mechanism to compress the whole embeddings to a context vector.The inputs of attention layer are: Firstly, the local representations and global representations are normalized by layer normalization[46]over the length dimension respectively for stable training.That is, ′ = ( ′ ) and ′ = ( ′ ) .Secondly, the normalized global representations and local representations are concatenated to jointrepresentations =< , , … >, where = [ ′ ; ′ ] ∈ .Then we use an dot attention layer to compute the sequence attention weights =< 1 , 2 , … , > ∈ , where ∈ is the attention weight on the ℎ position, = ∈ ℎ×1 is the learnable parameter.Besides the sequence attention weights, there is structure attention weights called structure attention =< 1 , 2 , … , > ∈ , which are calculated by reconstruction perplexities, = | 8,422.4 | 2022-12-29T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Turing mechanism underlying a branching model for lung morphogenesis
The mammalian lung develops through branching morphogenesis. Two primary forms of branching, which occur in order, in the lung have been identified: tip bifurcation and side branching. However, the mechanisms of lung branching morphogenesis remain to be explored. In our previous study, a biological mechanism was presented for lung branching pattern formation through a branching model. Here, we provide a mathematical mechanism underlying the branching patterns. By decoupling the branching model, we demonstrated the existence of Turing instability. We performed Turing instability analysis to reveal the mathematical mechanism of the branching patterns. Our simulation results show that the Turing patterns underlying the branching patterns are spot patterns that exhibit high local morphogen concentration. The high local morphogen concentration induces the growth of branching. Furthermore, we found that the sparse spot patterns underlie the tip bifurcation patterns, while the dense spot patterns underlies the side branching patterns. The dispersion relation analysis shows that the Turing wavelength affects the branching structure. As the wavelength decreases, the spot patterns change from sparse to dense, the rate of tip bifurcation decreases and side branching eventually occurs instead. In the process of transformation, there may exists hybrid branching that mixes tip bifurcation and side branching. Since experimental studies have reported that branching mode switching from side branching to tip bifurcation in the lung is under genetic control, our simulation results suggest that genes control the switch of the branching mode by regulating the Turing wavelength. Our results provide a novel insight into and understanding of the formation of branching patterns in the lung and other biological systems.
Introduction
The Mammalian lung is a striking example of organs that develop through branching morphogenesis. During lung morphogenesis, two primary forms of branching, side branching and tip bifurcation, which occur in sequence, have been identified [1]. The switch of branching mode from side branching to tip bifurcation is postulated to be under genetic control [1,2].
To investigate how genes work to generate these patterns, a mathematical model [3] derived from the Gierer-Meinhardt activator-inhibitor model [4] was used in our previous a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 study [5]. We demonstrated a mechanism through which the interaction of biological morphogens creates branched structures in the lung. The cascades of branching forms that have been observed in the lung, including side branching and tip bifurcation, were successfully reproduced by the branching model. Although the biochemical mechanism-the interaction of morphogens-provides an elegant explanation of lung branching morphogenesis, the mathematical mechanism underlying the branching patterns needs to be further investigated. For example, the branching mode switch between side branching and tip bifurcation can be controlled by a key parameter related to consumption by cells in the simulation of the model; however, it is not easily explained by the interaction of morphogens. Mathematical studies focus on the dynamical behaviors of mathematical models [6][7][8][9]. However, there is lack of bridge between branching morphogenesis and mathematical mechanism. Based on the branching model, we investigate the mathematical mechanism underlying lung branching pattern formation in this paper.
In our previous study of the dynamics of side branching and tip bifurcation [10], we showed that Turing instability occurs in the branching patterns. Turing instability can induce spatial patterns in the models, such as spots, stripes, hole patterns, and more complicated patterns, which is applied to modeling biological patterning phenomena in fish skin, terrestrial vegetation, sea shells, and others [11][12][13][14]. To reveal the mathematical mechanisms underlying branching patterns, we conducted Turing instability analysis.
In this paper, we decoupled an activator-inhibitor model from the branching model and performed simulations of the two models to obtain Turing patterns and branching patterns. Our simulation results show that Turing instability occurs at the growing tip of the branching patterns. The Turing patterns underlying the branching patterns are spot patterns. The spot patterns are in the form of concentration peaks, leading to branching patterns, with a local activator concentration peak formed and moving ahead of the growing tips. This indicates that the local morphogen concentration peak plays a key role in the growth of branching. Furthermore, a sparse spot pattern underlies the tip bifurcation patterns, while a dense spot pattern underlies the side branching patterns. The dispersion relation analysis shows that the wavelength of the spot patterns acts on the branching structures. As the wavelength decreases, the spot patterns change from sparse to dense, the rate of tip bifurcation decreases and side branching eventually occurs instead. A sufficient wavelength is required for the occurrence of tip bifurcation, while an insufficient wavelength provides favorable conditions for side branching. Our results suggest that genes control the branching structures in the lung by regulating the Turing wavelength. Our results provide a fresh insight into and understanding of the formation of branching patterns in the lung and other biological branching systems.
Methods
The branching model we used in this paper is defined by Eqs (1)-(4). The four variables in the model equations are: activator A, inhibitor H, substrate S, and cell differentiation state Y.
In the branching model, the term cA 2 S H in the first equation represents that activator A is upregulated by itself in autocatalytic reaction kinetics at rate c, with the dependence on substrate S, and inhibited by inhibitor H. cA 2 S in the second equation represents the catalyzed effect of H by A. ρ A Y and ρ H Y represent A and H are secreted by differentiated cells Y at rates ρ A and ρ H . c 0 in the third equation represents S is produced at rate c 0 . −εYS represents S is consumed by differentiated cells Y at a rate ε. dA þ Y 2 1þfY 2 represents a high A-concentration induces irreversible cell differentiation (the Y-concentration goes from low to high). −μA, −vH, −γS, and −eY represent A, H, S, and Y decay in a first-order reaction at rates μ, v, γ, and e. A, H, and S are assumed to be diffusible, with diffusion coefficients D A , D H , and D S , respectively.
This branching model is derived by an activator-inhibitor system (Eqs 1 and 2) with dependence on the substrate. Thus, the branching pattern formation described in the model is divided into two processes: the formation of the local pattern on the stalk and the spatial extension of the stalk. The former is generated by the activator-inhibitor system, which exhibits Turing instability, and the latter results from the dependence of the activator-inhibitor system on the substrate. To explore the Turing instability underlying the branching model, a scheme was performed according to the following steps, as depicted in Fig 1. Step 1. Decoupling. Decouple the activator-inhibitor model from the branching model, with S and Y as the parameters. We used the decoupling method described in the literature [10].
Step 2. Calculating a crescent-shaped Turing region. Calculate the S-Y parameter space of the activator-inhibitor model for the Turing instability (see Appendix for the Turing instability analysis).
Step 3. Plotting the differentiation trajectory of a cell. Extract (S, Y) pairs of a cell in the branching system with cell differentiation, then plot an SY-curve for the differentiation trajectory of the cell.
Step 4. Turing state selection and simulation. Select a point on the trajectory within the Turing region as the values of parameters S and Y and perform simulation of the activator-inhibitor model to obtain the Turing-type pattern underlying the branching pattern.
Numerical simulation
We performed numerical simulations of both the branching model and the activator-inhibitor model to obtain the branching patterns and the related Turing patterns. For the branching model, we set the values of the parameters according to the literature [5]. For the activatorinhibitor model, simulations were performed on a 200×200 grid with periodic boundary conditions, and the parameter values and initial values of the variables were set according to the branching system. Starting from a randomly perturbed uniform initial condition, the simulation of the activator-inhibitor model was stopped when the stationary spatial structure was formed.
Turing spot patterns underlying the branching patterns
To explore the Turing patterns underlying the branching patterns, we calculated the S-Y parameter space for the Turing instability to obtain a crescent-shaped Turing region and recorded the cell differentiation trajectories of both the tip bifurcation and side branching patterns. We then performed simulations of their underlying Turing patterns. The simulation results are shown in Fig 2. For the tip bifurcation patterns (Fig 2Aa), the underlying Turing patterns are spot distributions (Fig 2Ab). The spot patterns are at points on the cell differentiation trajectories of the tip bifurcation patterns within the Turing region (Fig 2Ac).The points refer to a cell state where cells are located at the growing tips of the tip bifurcation patterns. This means the Turing instability occurs at the growing tips of the tip bifurcation patterns.
To further investigate the effect of the spot patterns on the tip bifurcation patterns, we explored their activator distribution since the activator and inhibitor interaction plays a key role in pattern formation. In Fig 2C, the tip bifurcation patterns always exist with a local activator concentration peak that is formed and moves ahead of the growing tips (Fig 2Ca and 2Cb). The spot patterns are in the form of peaks of the activator concentration (Fig 2Cc). This indicates that the Turing instability affects the branching growth. The structure of the spot pattern leads to the activator concentration peaks formed at the growing tips of the tip bifurcation patterns.
We then addressed how the Turing spot patterns underlie the tip bifurcation patterns. To interpret the mechanism, we explored the morphogen concentration in the Turing patterns because it plays an important role in cell growth [15,16]. There are other typical structures of Turing patterns in addition to spots, such as stripes and holes. Fig 3 shows the activator concentration of those patterns. Spot patterns exhibit local high concentration peaks, while stripe and hole patterns are shown in a gentle concentration distribution. The spot patterns have a much higher activator concentration gradient than the stripe and hole patterns. This indicates that a high local morphogen concentration is required for tip bifurcation growth. The high concentration peaks at the growing tips of the tip bifurcation patterns caused by the Turing instability stimulate the outward extension of the tips and outward growth of the branches.
With respect to the side branching patterns (Fig 2Ba), the underlying Turing patterns also have spot distributions (Fig 2Bb). The spot patterns are at points on the cell differentiation trajectories of the side branching patterns within the Turing region (Fig 2Bc). The points refer to the cell state where cells are located at the growing tips of the side branching patterns, which means Turing instability also occurs at the growing tips for the side branching patterns. Fig 2D shows that a local activator concentration peak is formed and moves ahead of the growing tips of the side branching patterns (Fig 2Da and 2Db), which is consistent with the spot patterns in the form of peaks of the activator concentration (Fig 2Dc). Those results are the same as the case of the tip bifurcation patterns. However, the spot patterns underlying the side branching patterns are much denser than those corresponding to the tip bifurcation patterns.
Spot density of the Turing spot pattern varies for branching structures
We observed that the structure of the Turing patterns underlying the branching patterns is spots. Furthermore, the spot density of the Turing patterns varies for the tip bifurcation and side branching patterns. Next, we investigated the connection between the spot density of the Turing spot patterns and the branching structures. In the simulation, both branching structures, tip bifurcation and side branching, can be generated by modifying a single parameter, ε (in Eq (3), the consumption rate of substrate by Y cells). In this way, we obtained the branching patterns and the underlying spot patterns, as shown in Figs 4 and 5.
Sparse Turing spot patterns underlying the tip bifurcation patterns. We set the tip bifurcation pattern shown in Fig 2Aa for a given ε. For convenience of comparison, we show the pattern in Fig 4B and its underlying spot pattern in Fig 4E. We then increased ε and obtained a tip bifurcation pattern with an increasing bifurcation rate, and the underlying spot pattern with a decreasing number of spots was observed (Fig 4A and 4D). Subsequently, we decreased ε, and another tip bifurcation pattern with a decreasing bifurcation rate were obtained, while the underlying spot pattern with an increasing number of spots was observed (Fig 4C and 4F).
For the tip bifurcation patterns, the underlying Turing patterns are sparse spot patterns. Tip bifurcation occurs at a decreasing rate with increasing number of spots in the spot patterns as ε decreases.
Dense Turing spot patterns underlying the side branching patterns. When ε is below a certain value, side branching patterns emerge rather than tip bifurcation patterns. We set the side branching pattern shown in Fig 2Ba for a given ε; we show the pattern in Fig 5B and its underlying spot pattern in Fig 5E. We then increased ε within the range for side branching patterns, and a side branching pattern was obtained with slightly increasing spatial interval between branches, while the underlying spot pattern was observed with a slightly decreasing number of spots (Fig 5A and 5D). Subsequently, we decreased ε, and another side branching pattern was obtained with a slightly decreasing spatial interval between branches and more outward growing branches, while the underlying spot pattern was observed with a the slightly increasing number of spots (Fig 5C and 5F).
For the side branching patterns, the underlying Turing patterns are dense spot patterns. As ε decreases, more side branches are produced, the spatial interval between branches decreases, and the number of spots in the underlying spot patterns increases.
Turing wavelength regulates the branching structures
Turing patterns are characterized by a critical wavelength [17]. To elucidate the phenomenon of distinct spot densities of the spot patterns underlying the tip bifurcation patterns and side branching patterns, we further explored the critical wavelength of the spot patterns by dispersion relation analysis. The dispersion relations shown in Fig 6 describe a function of Re(λ) that depends on wavenumber k, where λ is the eigenvalue with the largest real part (see Appendix for obtaining the dispersion relations). The wavelength is calculated by dividing 2π by the critical wavenumber at which the maximum value of λ occurs.
In Fig 6, we show the dispersion relations for the spot patterns underlying the branching patterns depicted in Figs 4 and 5 and present a comparison of the wavelength (2π/wavenumber) in sequence. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot patterns increases (Fig 4), the dispersion relations illustrate that the wavelength decreases (Fig 6A, green curves; Fig 6B, green bars). When the branching mode switches from tip bifurcation to side branching with the underlying spot pattern change from a sparse to dense distribution, the wavelength decreases further (Fig 6A, green to orange curves; Fig 6B, green to orange bars). For the side branching patterns, when the spatial interval between branches decreases with more outward growing branches, the number of spots in the underlying dense spot pattern increases slightly, and the wavelength decreases slowly (Fig 6A, orange curves; Fig 6B, orange bars). These data suggest that as the wavelength decreases, the number of spots in the spot patterns increases, and tip bifurcation occurs at a decreasing rate. When the wavelength decreases below a certain value, side branching occurs rather than tip bifurcation.
To investigate the effect of the wavelength on the branching patterns for different Turing regions, we varied parameter ρ H (in Eq (2), inhibitor secreted by cells) to explore the different Turing regions, since ρ H is a key factor in the branching patterns in the simulation [5] and is a parameter in the activator-inhibitor model (Eqs (1) and (2)). As we have already analyzed the Turing region for ρ H = 0.0001, four Turing regions for two smaller and two larger ρ H values are selected for the analysis. Through dispersion relation analysis (Fig 8C), we show the wavelength (Fig 8D). Trends similar to those for the Turing region with ρ H = 0.0001 were observed. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot patterns increases (Fig 8B), the wavelength decreases (Fig 8D, green bars). When the branching mode switches from tip bifurcation to side branching with the underlying spot patterns changing from a sparse to a dense distribution (Fig 8B), the wavelength decreases further (Fig 8D, green to orange bars). For the side branching patterns, when the spatial interval between branches decreases and more branches grow outward, the number of spots in the underlying dense spot patterns increases slightly (Fig 8B), and the wavelength decreases slowly (Fig 8D, orange bars). In addition to the similar trends, an interesting hybrid branching pattern (Fig 8B4), which mixes tip bifurcation and side branching, is generated. Both the number of spots and the wavelength of the underlying spot patterns are between those corresponding to the tip bifurcation and side branching patterns (Fig 8B4 and 8D, blue bars).
For the Turing region for ρ H = 0.00007, similar trends are observed in Fig 9. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot patterns increases (Fig 9B), the wavelength decreases (Fig 9D, green bars). When the branching mode switches to side branching with the underlying dense spot patterns (Fig 9B), the wavelength decreases further (Fig 9D, green to orange bars). For the side branching patterns, when the branches grow closely and the number of spots in the underlying dense spot patterns increases slightly (Fig 9B), the wavelength decreases slowly (Fig 9D, orange bars).
Similarly, a hybrid branching pattern also emerges, and both the number of spots and the wavelength of the underlying spot patterns are between those corresponding to the tip Turing mechanism underlying a lung branching model bifurcation and side branching patterns (Fig 9B4 and 9D, blue bars). However, Fig 9B4 shows that it is not easy for side branch to emerge from the tip branching structure.
Similar trends were found in the Turing region for ρ H = 0.00013 in Fig 10. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot patterns increases (Fig 10B), the wavelength decreases (Fig 10D, green bars). When the branching mode switches to side branching with a dense underlying spot pattern (Fig 10B), there is an evident decrease in the wavelength (Fig 10D, green to orange bars). For side branching patterns, when branches grow close and the number of spots in the underlying dense spot patterns increases slightly (Fig 10B), the wavelength decreases slowly (Fig 10D, orange bars).
However, no hybrid branching patterns were observed in the Turing region for 0.00013. Similar trends were observed in Fig 11 in the Turing region for ρ H = 0.00015. When the bifurcation rate in the tip bifurcation patterns decreases and the number of spots in the underlying sparse spot patterns increases (Fig 11B), the wavelength decreases (Fig 11D, green bars). When the branching mode switches to side branching with an underlying dense spot pattern (Fig 11B), the wavelength decreases greatly (Fig 11D, green to orange bars). In the side branching patterns, when branches grow close and the number of spots in the underlying dense spot patterns increases slightly (Fig 11B), the wavelength decreases slowly (Fig 11D, orange bars).
Additionally, there are no hybrid branching patterns observed in the Turing region for ρ H = 0.00015.
The simulation results demonstrate that the effects of the wavelength on the branching patterns have similar trends in different Turing regions. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot pattern increases, the wavelength decreases. When the branching mode switches from tip bifurcation to side branching and the underlying spot pattern changes from a sparse to dense distribution, the wavelength decreases further. For the side branching patterns, when the spatial interval between branches decreases and the number of spots in the underlying dense spot pattern increases slightly, the wavelength decreases slowly.
Discussion
Our simulation results demonstrate that a local high morphogen concentration and the Turing wavelength play important roles in pattern formation in the branching model.
In the branching patterns, the growing tips exhibit Turing instability. We show that it is the Turing spot patterns underlying the branching patterns. The spot patterns are in the form of concentration peaks, which results in a local morphogen concentration peak formed at the tips in the branching patterns. The local morphogen concentration peak is unstable and induces tip expansion into the free space, causing branches to grow. This result is in agreement with the in vitro experimental results of Hagiwara et al [18], who showed that a high cell concentration gradient is required for cell branching in the lung.
Furthermore, we found that the spot density of the spot pattern varies for branching structures. A sparse spot pattern underlies the tip bifurcation patterns, while a dense spot pattern underlies the side branching patterns.
The dispersion relation analysis shows that the wavelength of the spot patterns affects the occurrence of tip bifurcation and side branching. For the tip bifurcation patterns, when the bifurcation rate decreases and the number of spots in the underlying sparse spot pattern increases, the wavelength decreases. When the branching mode switches from tip bifurcation to side branching and the underlying spot pattern changes from a sparse to a dense distribution, the wavelength decreases further. For the side branching patterns, when the spatial interval between branches decreases as more branches grow, the number of spots in the underlying dense spot pattern increases slightly and the wavelength decreases slowly.
The simulation results suggest that when the wavelength decreases and the number of spots in the spot patterns increases, tip bifurcation occurs at a decreasing rate. When the wavelength decreases below a certain value, the spot patterns shift to a dense distribution, no tip bifurcation occurs and side branching is observed. An insufficient wavelength impedes tip bifurcation but provides favorable conditions for side branching.
Branching patterns and the Turing patterns are two types of patterns in mathematical biology. Our work contributes to correlating the formation of branching patterns with Turing patterns. Although we demonstrate the connection between spot patterns and branching patterns, little is known about other Turing patterns, such as stripe patterns and hole patterns. The dispersion relation analysis shows that the wavelength affects the branching pattern, and the trend of how the wavelength affects the branching structures is revealed. However, the exact mechanism remains to be explored.
Nevertheless, our work reveals the Turing mechanism underlying the branching patterns. In our previous study [5], we demonstrated that the branching mode can be changed from tip bifurcation to side branching by varying the parameter ε. Our results in this paper further show that ε controls the branching mode switch because it regulates the Turing wavelength. In the experimental work, the branching mode changes during lung development are shown to be controlled by genes. Our results further suggest that the branching mode switch in the lung is a result of genes regulating the Turing wavelength, similar to a previous study [19], which found that gene modulation of digit patterning involves a Turing mechanism. Our work provides a fresh insight into and understanding of the formation of branching patterns in the lung and other biological branching systems. | 5,517.4 | 2017-04-04T00:00:00.000 | [
"Biology"
] |
MEDITERRANEAN JOURNAL OF HEMATOLOGY AND INFECTIOUS DISEASES www.mjhid.org ISSN 2035-3006 Original Articles Prognostic Significance of NRAS Gene Mutations in Children with Acute Myelogenous Leukemia
Background: NRAS mutations in hematologic malignancies, especially in those of myeloid origin. Objective: We aimed to determine the frequency of NRAS (NRAS significance in Egyptian children with acute myelogenous leukemia (AML). Subject and methods: Peripheral blood and bone marrow (BM) samples were taken from 39 de novo pediatric AML patients. Twenty subjects with matched age and sex were selected as a control group. Samples from patients an genomic PCR-SSCP method. Results: NRAS mutations at the time of diagnosis was found in 6/39 (15.4%) AML cases. Patients with NRAS mutant had no significant improved clinical outcome than patients w Patients with NRAS mutant had similar complete remission (CR) rates compared with non patients (66.7% vs. 69.5%, P=0.43). presence of NRAS mutant (RR 33.4% vs. 30.2%, P=0.26). overall survival (OS) was associated with the presence of NRAS mutations. This adverse prognosis associated with NRAS mutations was also observed in terms of disease (P=0.007). Univariate analysis showed that unfavorable prognostic factors for DFS were cytogenetic data (P = 0.005) and the NRAS gene mutation (P = 0.002). mutant did not contribute to increase the disease recurrence, however NRAS was found to be a poor prognostic factor for children with AML. Further studies to confirm these findings are required because of the small number of patients with NRAS mutation.
: NRAS mutations are the most commonly detected molecular abnormalities in hematologic malignancies, especially in those of myeloid origin.
We aimed to determine the frequency of NRAS (NRAS mutant ) mutation; and its prognostic with acute myelogenous leukemia (AML). Peripheral blood and bone marrow (BM) samples were taken from 39 de novo pediatric AML patients. Twenty subjects with matched age and sex were selected as a control group. Samples from patients and control were analyzed for Exons 1, 2 of NRAS gene using NRAS mutations at the time of diagnosis was found in 6/39 (15.4%) AML cases. Patients had no significant improved clinical outcome than patients w had similar complete remission (CR) rates compared with non patients (66.7% vs. 69.5%, P=0.43). Those in CR had a similar relapse rate regardless of the (RR 33.4% vs. 30.2%, P=0.26). However, an adverse prognosis for 3 year overall survival (OS) was associated with the presence of NRAS mutations. This adverse prognosis associated with NRAS mutations was also observed in terms of disease sis showed that unfavorable prognostic factors for DFS were cytogenetic data (P = 0.005) and the NRAS gene mutation (P = 0.002).
did not contribute to increase the disease recurrence, however NRAS stic factor for children with AML. Further studies to confirm these findings are required because of the small number of patients with NRAS mutation. ), which permits unrestricted use, distribution, and reproduction in any medium, are the most commonly detected molecular abnormalities ) mutation; and its prognostic Peripheral blood and bone marrow (BM) samples were taken from 39 de novo pediatric AML patients. Twenty subjects with matched age and sex were selected as a control d control were analyzed for Exons 1, 2 of NRAS gene using NRAS mutations at the time of diagnosis was found in 6/39 (15.4%) AML cases. Patients had no significant improved clinical outcome than patients without mutation. had similar complete remission (CR) rates compared with non-mutated Those in CR had a similar relapse rate regardless of the However, an adverse prognosis for 3 year overall survival (OS) was associated with the presence of NRAS mutations. This adverse prognosis associated with NRAS mutations was also observed in terms of disease-free survival (DFS) sis showed that unfavorable prognostic factors for DFS were did not contribute to increase the disease recurrence, however NRAS mutant stic factor for children with AML. Further studies to confirm these findings are required because of the small number of patients with NRAS mutation.
suppression of normal hematopoiesis. Cytogenetic and molecular studies have defined AML as a heterogeneous disease. 1 The presence of defined karyotypes is among the most important prognostic factors in acute myeloid leukemia (AML). However, even within defined cytogenetic groups stability of remission and long-term survival may vary significantly.
Therefore, additional recurrent aberrations may have a prognostic impact. 2 It has been showed that Pediatric AML patients may harbor more than one mutation at diagnosis, some of which with a possible prognostic impact. [3][4][5][6][7] Mutations in the NRAS gene are one of these genetic aberrations that play a role in myeloid neoplasia. 8 NRAS gene plays an important role in the regulatory processes that govern proliferation, differentiation and apoptosis; 9 abnormality in this gene has been implicated in the pathogenesis of AML. RAS oncogenes encode a family of membrane-associated proteins, which regulate signal transduction upon binding to a variety of membrane receptors. 10 There are three functional RAS genes (NRAS, KRAS and HRAS); in AML. K-RAS mutation occurs to a lower but still significant frequence in pediatric AML. 11 NRAS is the most prominent; reported in 11%-30% of patients. 12 All homologs were exclusively in codons 12, 13, and 61 conferring constitutive activation of the RAS protein, which is subsequently held in the GTP bound status leading to an increased activity of the RAS pathway causing an increased proliferation and a decreased apoptosis rate. 13 RAS mutations were described in the various solid tumors as well as in hematologic malignancies. The prognostic impact of NRAS mutations is still under research and seems to vary from disease to disease, 14 several studies indicated a poor prognostic impact for that mutation, 14,15 and however Lapillonne et al confirmed this finding in pediatric AML. 16 On the contrary, Neubauer et al found a favorable outcome for malignancies with NRAS mutations, 12 and some studies failed to define any prognostic impact for NRAS mutations. 10,13,17 We undertook this study to determine the frequency of NRAS mutation and its prognostic significance in a number of Egyptian pediatric patients with AML.
Patients and Methods.
Newly diagnosed pediatric AML patients were included in this study; cases were recruited from pediatric hematology clinics of Tanta University Hospitals, Tanta, Egypt. Peripheral blood and BM samples were obtained from 39 de novo AML cases at initial diagnosis obtaining an informed consent from patients or their guardians; they were 21 boys and 18 girls. The median age was 7.4 years (range, 5.6-13 years). The median percentage of blasts in the fresh bone marrow samples was (65%). All included patients were receiving the same treatment protocol approved by the Oncology Team of Tanta University Hospital TUH; in brief they received 1-2 cycles of 14-21 days of intensively timed induction chemotherapy (doxorubicin, Ara-C, 6-thioguanine and methotrexate) depending upon BM aspiration done at the end of each induction course. Additional consolidation regimens included 1-2 cycles of 12-days (doxorubicin, Ara-C, VP-16 and methotrexate). Then patients who did not achieve remission received intermittent chemotherapy (Ara-C, 6-thioguanine and methotrexate) every 3 months for 6 cycles with the standard follow-up care and regular BM aspiration every 21 days to confirm remission; complete remission means normocellular bone marrow contain less than 5% blast cells and showing evidence of normal maturation of other bone marrow elements as evidenced by the repeated BM aspiration. The average duration of follow up was mean ± SD (32 ± 2.24 months).
Patients were classified according to the standard methods; morphological according FAB classification, cytochemical and immunological evaluation. 13 Informed consent was obtained from twenty subjects with matched age and sex who were selected as a control group. Samples from patients and control were analyzed for mutation in Exons 1, 2 of the NRAS gene using genomic PCR method.
Cytogenetic Analysis. Cytogenetic investigations were performed by karyotyping G-banding analysis in all patients. 18 PCR of NRAS Gene. Genomic DNA was extracted from diagnostic bone marrow specimens of patients and control using the QIAamp DNA blood mini kit for DNA extraction provided by QIAGEN (Inc Chasworthy, CA). The concentration of extracted DNA was then measured by UV spectrophotometry at 260 & 280 nm and analyzed by electrophoresis on 2% agarose gel for detection of purity.
Separate assays were developed for mutation detection at (hot spots) in codons 12/13 (exon 1) and codon 61 (exon 2 Single-strand conformation polymorphism (SSCP) was performed to PCR product to detect mutations. Products were mixed with 10 volumes of loading buffer, quenched on ice immediately, and applied to 5% polyacrylamide gel electrophoresis at 50 V, overnight, stained by silver nitrates and wrapped in plastic foil. Normal gene exhibits a specific conformational pattern, while mutant gene displays pattern with different electrophoretic mobility (mobility shift) which was confirmed by repeated SSCP (Figure 1). Statistical Methods. Data were processed and analyzed using SPSS for windows version 16.0 (SPSS, Inc, Chicago, IL, USA). Qualitative data were expressed as frequency and percentage and quantitative data were expressed as median. Chi-square test was used for comparative analysis. The prevalence of NRAS mutations in AML was too low to permit statistical analysis for correlation with survival. Kaplan-Meier analysis was used for survival of patients. The prognostic significance of the clinical variables was assessed using the Cox proportional hazards model. For all analyses, the P values were twotailed, and a P value of less than 0.05 was considered statistically significant. (16) is presented with highest WBC count(median 41 x10 3 /µL), and t(8;21)AML patients with lowest WBCs (median 18 x10 3 /µL) compared with the other cytogenetic groups ( Table 2). The median ages of children with Inv (16) AML (7 years), with t(8;21) (8.3 years) , with del(7) (8.5), and with del 5(8.9 years) were younger compared with CN-AML (9.5 years).
In the FAB subtype M4e, NRAS mutations were represented more frequently (50%, 3 of 6) than they were in all other subtypes. In the M3, M5, M6 and M7 subtypes, no NRAS mutation was detected, making NRAS mutations highly underrepresented in these subtypes. In all other FAB subtypes, the distribution of NRAS mutations did not differ significantly from each other. A detailed distribution of NRAS mutations in the respective FAB subtypes is presented in Table 3.
Based on the previous findings, we analyzed the influence of NRAS mutations on the prognosis of pediatric AML patients for whom clinical follow up data were available. Table 4 shows the clinical outcome in pediatric AML patients. In the total group, there was no difference with regard to CR rate (NRAS mutant , 66.7%; NRAS wild , 69.5%; P = 0.43). Relapse was significantly more frequent in the AML patients with NRAS gene mutations (NRAS muant , 33.4 %; NRAS wild , 30.2 %; P= 0.26).
Discussion.
In recent years, a major focus of molecular cancer research has been the analysis of genes that may be causative in carcinogenesis (oncogenes). The clinical significance of RAS mutations has not been uniformly established. In the current study, we evaluated the clinical significance of NRAS mutations and investigated NRAS mutant by genomic PCR method in 39 newly diagnosed pediatric AML cases. Activated RAS mutations confer proliferative and survival signals. Mutations in the NRAS gene are frequent genetic aberrations in adult AML. 19 However, there have been only a few studies on childhood AML. 20 With different mutation-detection techniques used and heterogeneous patient populations studied, the reported incidence of NRAS mutant in patients with childhood AML at presentation vary considerably; in our study 15.4% of pediatric AML patients (6/39) had NRAS mutant , corresponding to the reported frequency by others. 15,21 Primary analyses revealed a statistically significant association between peripheral and bone marrow blast counts and NRAS mutation (P=0.01,P=0.04 respectively), however no significant differences had been found between the two groups with respect to age, gender, platelet count and WBCs count. These findings are in agreement with those reported in literatures. 10,13 The highest frequency (33.3%) of NRAS mutations in our cohort (2/6) compared with the total cohort was detected in patients with inv (16). The high incidence of NRAS mutations in inv (16) in our study corresponded with most of the previously published studies reporting frequencies of 26% to 33% (22,23). 11q23/MLL aberrations are a frequent abnormality in pediatric AML. 24,25,26 The frequency of 11q23/MLL-rearranged AML may have been underestimated because of low number of cases in the included study and because in our study as well as in other studies performed in the past, the cryptic MLL rearrangements may be not detected by conventional karyotyping. It is conceivable that the biological differences may lead to different treatment strategies for these age categories in the future. 27 In our study, oldest children with AML were characterized by a high frequency of normal cytogenetic (53.8%) but the very young in the included study are characterized by higher frequency of inv (16).
The prognosis of AML depends on factors such as age, initial leukocyte count, FAB classification, karyotype, immune phenotype, and response to remission-induction therapy. 28,29 Our study showed that cytogenetic was unfavorable prognostic factor among AML patients by univariate analysis. This was in agreement with other study who found that cytogenetic data is thought to be the most important prognostic factor for AML. 30 The prognostic significance of NRAS mutation in both adults and children remains disputed. Generally NRAS gene mutation is associated with tumor progression and was reported to be associated with poor prognosis in solid tumors and acute lymphoblastic leukemia (ALL). 31,32 Published reports addressing the clinical significance of NRAS mutations in patients with acute myeloid leukemia are inconclusive. Whereas some studies demonstrated a beneficial clinical effect of NRAS mutations, 33,34 others reached a different conclusion (e.g. lower CR). 35 Other studies also did not that show that patients with NRAS mutations had significantly good outcomes. 36,37 In this study, the presence of NRAS gene mutation was related to similar complete remission (CR) rates following induction chemotherapy compared with nonmutated patients (66.7% vs. 69.5%, P=0.43). Those in CR had a similar relapse rate regardless of the presence of NRAS mutations (RR 33.4% vs. 30.2%, P=0.26). However, the presence of NRAS mutations was associated with poor three years OS and DFS compared with wild type cases (OS, P=0.01; DFS, P=0.007). This discrepancy between these studies findings and our study may be explained by differences in the intensity of the chemotherapy protocols employed to treat this group of patients and the small number of our cases.
Conclusions. In addition to the evidence that activation of the RAS-signaling cascade contributes to the molecular pathogenesis of myeloproliferative disorders (38), NRAS mutation has adverse prognostic impact but further pediatric studies will be necessary to extend our knowledge and more precisely define the prognostic significance of NRAS mutations. This study also demonstrates the need to screen for specific translocation partners to allow appropriate treatment stratification like WT1 and FLT3. | 3,428.6 | 2011-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Inducing a desired value of correlation between two point-scale variables: a two-step procedure using copulas
Focusing on point-scale random variables, i.e. variables whose support consists of the first m positive integers, we discuss how to build a joint distribution with pre-specified marginal distributions and Pearson’s correlation ρ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\rho $$\end{document}. After recalling how the desired value ρ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\rho $$\end{document} is not free to vary between -1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$-1$$\end{document} and +1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$+1$$\end{document}, but generally ranges a narrower interval, whose bounds depend on the two marginal distributions, we devise a procedure that first identifies a class of joint distributions, based on a parametric family of copulas, having the desired margins, and then adjusts the copula parameter in order to match the desired correlation. The proposed methodology addresses a need which often arises when assessing the performance and robustness of some new statistical technique, i.e. trying to build a huge number of replicates of a given dataset, which satisfy—on average—some of its features (for example, the empirical marginal distributions and the pairwise linear correlations). The proposal shows several advantages, such as—among others—allowing for dependence structures other than the Gaussian and being able to accommodate the copula parameter up to an assigned level of precision for ρ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\rho $$\end{document} with a very small computational cost. Based on this procedure, we also suggest a two-step estimation technique for copula-based bivariate discrete distributions, which can be used as an alternative to full and two-step maximum likelihood estimation. Numerical illustration and empirical evidence are provided through some examples and a Monte Carlo simulation study, involving the CUB distribution and three different copulas; an application to real data is also discussed.
Introduction
Datasets arising in the social sciences often contain ordinal variables. Sometimes they are genuine ordered assessments (judgements, preferences, degree of liking of a product or adhesion to a sentence, etc.), whereas in other circumstances they are discretized or categorized for convenience (age of people in classes, education achievement, levels of blood pressure, etc.) The former situation often arises when a survey is administered to a group of people being studied, e.g. questionnaires submitted by a company to their customers with the aim of assessing their level of satisfaction towards a product or service the company has provided. Respondents choose a qualitative assessment on a graduated sequence of verbal definitions (for instance, "extremely dissatisfied", "very dissatisfied" , … "very satisfied", "extremely satisfied"), also known as "Likert scale", which can be coded as integers numbers ( 1, 2 … , m ) just for convenience: this amounts to assuming that the categories are evenly spaced (Iannario and Piccolo 2012). There are several statistical models and techniques that can be employed for handling multivariate ordinal data without trying to quantify their ordered categories. (The review by Liu and Agresti (2005) and the later textbook of Agresti (2010) give a thorough treatment.) Among them, correlation models and association models both study departures from independence in contingency tables and involve the assignment of scores to the categories of the row and column variables in order to maximize the relevant measure of relationship (the correlation coefficient in the correlation models or the measure of intrinsic association in association models, see Faust and Wasserman 1993). We also remind nonlinear principal component analysis (NLPCA), which is a special case of a multivariate reduction technique named homogeneity analysis and which can be usefully applied in customer satisfaction surveys (Ferrari and Manzi 2010) for mapping the observed ordinal variables into a one-dimensional (or, more generally, lower-dimensional) quantitative variable. Neither the weights of the original variables nor the differences between their categories (this is the distinction with standard principal component analysis) are assumed a priori.
However, substituting the ordered categories with the corresponding integer numbers, though representing just an arbitrary assumption, is still quite a common and accepted practice, which leads to further multivariate statistical analyses handling them as (correlated) discrete variables (Norman 2010;Carifio and Perla 2008). Now, one may be interested in building and simulating a multivariate random vector whose univariate components are point-scale variables. In fact, describing a real phenomenon by creating mirror images and imperfect proxies of the (partially) unknown underlying population in a repeated manner allows researchers to study the performance of their statistical methods through simulated data replicates that mimic the real data characteristics of interest in any given setting (Demirtas and Yavuz 2015;Demirtas and Vardar-Acar 2017). This is often necessary since exact analytic results are seldom available for finite sample sizes, and thus, simulation is required to assess the reliability, validity, and plausibility 1 3 Inducing a desired value of correlation between two point-scale… of inferential techniques and to evaluate to which extent they are robust to deviations from statistical assumptions. However, rather than completely detailing a joint distribution for modelling the phenomena under study, it is often more convenient and realistic to specify only the marginal distributions and pairwise correlations, which are very easy to interpret and whose sample analogues can be easily computed on the dataset at hand one wants to "reproduce". In Lee (1997), some methods are described for generating random vectors of categorical or ordinal variables with specified marginal distributions and degrees of association between variables. For ordinal variables, a common index for measuring association is Goodman and Kruskal's coefficient (Kruskal and Goodman 1954;Ruiz and Hüllermeier 2012), ranging between −1 and +1 , with zero corresponding to independence. A first proposal is based upon using convex combination of joint distribution with extremal values of (extremal tables); a second one relies on threshold arguments and involves Archimedean copulas. In this paper, we suggest a sort of modification of this latter method to correlated point-scale variables.
In the following, we will limit our analysis to the bivariate case, which is by far easier to deal with, but whose results, with some caution, can be generalized to the multivariate context. We consider two point-scale random variables (rvs), X 1 and X 2 , defined over the support spaces X 1 = 1, 2, … , m 1 and X 2 = 1, 2, … , m 2 , respectively, with probability mass functions p 1 (i) = p i⋅ = P(X 1 = i), i = 1, … , m 1 , and p 2 (j) = p ⋅j = P(X 2 = j), j = 1, … , m 2 . We want to determine some bivariate probability mass function p ij = p(i, j) = P(X 1 = i, X 2 = j), i = 1, … , m 1 ;j = 1, … , m 2 such that its margins are p 1 and p 2 and the correlation X 1 ,X 2 is equal to an assigned . In order to give an answer to this question, we have first to recall two properties of Pearson's correlation, which apply to both the continuous and, to even a larger extent, the discrete case; this is the topic of Sect. 2. In Sect. 3, we first state the problem of finding a joint probability function with assigned margins and correlation in general terms; then, we focus on a particular class of joint distributions, recalling how to build copula-based bivariate discrete distributions; finally, we describe the proposed procedure for inducing a desired value of correlation between two point-scale variables. Section 4 illustrates an application to CUB distributions. Section 5 recalls inferential procedures for dependent rvs and based on the algorithm of Sect. 3 devises a sort of moment method for estimating the dependence parameter of the copula-based bivariate distribution. Section 6 describes a Monte Carlo simulation study whose aim is to comparatively assess the statistical performances of the new inferential method and the existing ones based on maximum likelihood. Section 7 provides an application to a real data set. In the concluding section, some final remarks are provided.
Attainable correlations between two random variables
Pearson's linear correlation is by far the most popular measure of correlation between two quantitative variables.
It is often employed to measure correlation also between Likert scale variables, which arises some criticism.
3
Despite the many useful properties it enjoys, it also reveals some disadvantages. A first drawback of Pearson's correlation is that given two marginal cumulative distribution functions (cdfs) F 1 and F 2 and a correlation value ∈ [−1, +1] , it is not always possible to construct a joint distribution F with margins F 1 and F 2 , whose correlation is equal to the assigned . This is an issue that is often underrated if not neglected by researchers (Leonov and Qaqish 2020). We can state this result (often reported as "attainable correlations", see McNeil et al. 2005, pp. 204-205) in the following way. Let (X 1 , X 2 ) be a random vector with marginal cdfs F 1 and F 2 and an unspecified joint cdf; assume also that Var(X 1 ) > 0 and Var(X 2 ) > 0 . The following statements hold: 1. The attainable correlations form a closed interval [ min , max ] with min < 0 < max . 2. The minimum correlation = min is attained if and only if X 1 and X 2 are countermonotonic. The maximum correlation = max is attained if and only if X 1 and X 2 are comonotonic. 3. min = −1 if and only if X 1 and −X 2 are of the same type, and max = 1 if and only if X 1 and X 2 are of the same type.
For point-scale rvs X 1 and X 2 , it is then clear that the maximum correlation is +1 if and only if they are identically distributed: In the general case, the values min and max can be computed by building the cograduation and countergraduation tables (see Salvemini 1939 andBarbiero 2012 for an example of calculation).
A second result about Pearson's correlation can be resumed as follows: given two margins F 1 and F 2 and a feasible linear correlation (i.e. a value falling within the interval [ min , max ] ), the joint distribution F having margins F 1 and F 2 and correlation is not unique. In other terms, the marginal distributions and pairwise correlation of a bivariate rv do not univocally determine its joint distribution. Even if this second fallacy may represent a limit from one side, on the other side represents a form of flexibility, since it means that given two pointscale distributions and a consistent value of , there are generally several (possibly, infinite) different ways to join them into a bivariate distribution with that value of correlation, as we will see in the next two sections.
A procedure for inducing a desired value of correlation between two point-scale random variables with assigned marginal distributions
We will now state the problem object of this work in general terms; then, resorting to copulas, we will reformulate it in a more specific context. Somehow, we will split the original problem into two sequential sub-problems: (i) finding a 1 3 Inducing a desired value of correlation between two point-scale… family of joint distributions with the assigned margins, (ii) finding within this family a distribution with the desired value of correlation.
Statement of the problem
The problem of finding a bivariate point-scale distribution with assigned marginal distributions and Pearson's correlation can be laid out as follows. We have to find the m 1 × m 2 probabilities p ij , 0 ≤ p ij ≤ 1 , defining the joint pmf of the rv (X 1 , X 2 ) , which satisfy the following system of equalities: The first two equalities correspond to the request of matching the assigned marginal distributions and the latter to the assigned correlation. The total number of equality constraints is m 1 + m 2 ( m 1 + m 2 − 1 actual constraints on the two margins, plus one on Pearson correlation).
If, for example, m 1 = m 2 = 2 (i.e. if X 1 and X 2 are shifted Bernoulli rvs), then we have a system of 4 equations in 4 variables, which yields a unique solution for the p ij . In this case, in fact, one can easily prove that if the assigned falls within the bounds min and max , equal to the probabilities satisfying the system (1) are For higher values of m 1 (and m 2 ) the solution is not unique; generally, there are infinite solutions (i.e. bivariate distributions) satisfying system (1)-given that the bivariate correlation bounds are respected (i.e. min ≤ ≤ max ). We illustrate it through the following example.
Instead of considering the whole set of feasible joint probabilities p ij obtained by solving system (1), one can restrict the analysis to a particular subset of these solutions satisfying the first two constraints on margins. For this aim, we will now recall the concept of copula and copula-based bivariate discrete distributions.
Generating bivariate discrete distributions having the pre-specified margins
Using copulas represents a straightforward solution for easily constructing a multivariate distribution respecting the assigned margins. A d-dimensional copula C is a joint cdf in [0, 1] d with standard uniform margins U j , j = 1 … , d: The importance of copulas in studying multivariate cdfs is summarized by the Sklar's theorem (McNeil et al. 2005), whose version for d = 2 states that if F 1 and F 2 are the cdfs of the rvs X 1 and X 2 , the function defines a valid joint cdf, whose margins are exactly F 1 and F 2 . This result keeps holding if X 1 and X 2 are point-scale rvs; in this case, the joint pmf can be derived from (2) as: for i = 1, … , m 1 ;j = 1, … , m 2 . It is worth noting that given a joint cdf F with margins F 1 and F 2 , Sklar's theorem also states that there exists a copula C such that F can be written as in (2). This copula is unique if F 1 and F 2 are continuous; on the contrary, uniqueness is not guaranteed if they are discrete.
There exists a multitude of copulas, which-as happens for joint cdfs-usually depend on some parameter ; we will now review three well-known parametric copula families.
The Gauss copula
The d-variate Gauss copula is the copula that can be extracted from a d-variate normal vector Y Y Y with mean vector and covariance matrix and is exactly the same as the copula of X X X ∼ N d (0 0 0, P) , where P is the correlation matrix of Y Y Y . In two dimensions, it can be expressed, for Ga ≠ ±1 , as: ds 1 ds 2 .
The Frank copula
The one-parameter bivariate Frank copula is defined as with ≠ 0 . For → 0 , we have that the Frank copula reduces to the independence copula; for → +∞ , it tends to the comonotonicity copula; for → −∞ , it tends to the countermonotonicity copula.
The Plackett copula
The one-parameter bivariate Plackett copula is defined as with ∈ (0, +∞) ⧵ {1} . When → 1 , it reduces to the independence copula, whereas for → 0 it tends to the countermonotonicity copula and for → ∞ to the comonotonicity copula.
An algorithm for inducing any feasible value of correlation within a parametric copula-based family of distributions
In order to induce any feasible value of correlation between the two discrete margins of the distribution (2), we have further to impose that the copula C(⋅; ) is able to encompass the entire range of dependence, from perfect negative dependence (which leads to the linear correlation min ) to perfect positive dependence ( max ). Copulas enjoying this property are named "comprehensive"; the three copulas recalled in the previous section are all comprehensive.
Once the marginal distributions of X 1 and X 2 are assigned, and the parametric copula C(⋅; ) has been selected, their correlation coefficient X 1 ,X 2 will depend only on the copula parameter ∈ [ min , max ] ; this relationship may be written in an analytical or numerical form, say X 1 ,X 2 = g( |F 1 , F 2 ) . Since the function g is not usually analytically invertible, inducing a desired value of correlation between two point-scale variables, falling in [ min , max ] , by setting an appropriate value of , is a task that can be generally done only numerically, by finding the (unique) root of the equation g( ) − = 0 . If g( ) is a monotone increasing function of the copula parameter, it can be implemented by resorting to the following iterative procedure (see a similar proposal for the Gauss copula in Ferrari and Barbiero 2012; Barbiero and Ferrari 2017; and an early extension to other copulas in Barbiero 2018): 1. Set (0) = (with being the value of for which the copula C reduces to the independence copula); (0) = 0. 2. Set t ← 1 and = (t) , with (t) some value strictly greater (smaller) The iterative process at the basis of the algorithm is quite clear (see Fig. 1): one starts from two points in the ( , ) Cartesian diagram: A = ( (0) , (0) ) and B = ( (1) , (1) ) , where (1) can be chosen arbitrarily, respecting the unique condition that the resulting (1) has the same sign as the target . From these two points, one derives the next value of , (2) (corresponding to the abscissa of point C), by linear interpolation, considering the slope m (2) , associated with the line passing through them, respecting the lower or upper bounds of (this is why the min and max operators appear in the recursive formulas for (t) of step 6); the procedure then continues, computing the actual value (2) (ordinate of point D), and then iteratively updating (t) (and computing (t) ) by taking into account just the last two points for determining the updated m (t) .
The above heuristic algorithm makes sense if g is a monotone increasing function, which is often the case: for the Gauss, Frank, and Plackett copulas, the linear correlation is an increasing function of the dependence parameter , keeping fixed the two marginal distributions. In fact, let us recall that we say that the joint cdf F(x 1 , x 2 ; ) , with fixed margins F 1 and F 2 , is "increasing in concordance" as increases if, for any 2 > 1 , Then, it follows (see, for example, Scarsini and Shaked 1996) Since the Gauss copula, the Frank copula, and the Plackett copula are all increasing in concordance with respect to their parameter, i.e. (Joe 2014) and the same holds for the joint cdf F(x 1 , x 2 ; ) = C(F 1 (x 1 ), F 2 (x 2 ); ) , then we can claim that X 1 ,X 2 is increasing in .
A very particular case is represented by the Gauss copula. A known theoretical result, which goes under the name of Lancaster theorem (Lancaster 1957, p.290), and later reported for example in Cario and Nelson (1997), allows us to claim that the correlation between the discrete rvs X 1 and X 2 has the same sign of and in absolute value it is not greater than the Gauss copula correlation: sgn( X 1 ,X 2 ) = sgn( Ga ) and | X 1 ,X 2 | ≤ | Ga | . Therefore, a reasonable value for the starting value (1) ∶= (1) Ga is the value of the target correlation itself.
The advantage of the proposed algorithm stands in the four following (connected) features: (i) the flexibility in the choice of the underlying copula, which can be different from the Gaussian and is just required to span the entire dependence range, (ii) the capacity of finding the appropriate value of without making use of any sample from the two marginal distributions, thus avoiding introducing sampling errors, (iii) the possibility of controlling a priori the error (absolute difference between target and actual value of X 1 ,X 2 )-setting equal to 10 −7 generally allows to recover in a few steps, and (iv) the absence of inner potentially time-consuming optimization or root finding routines.
Existing procedures for solving the same (or a similar) problem are available in the literature, but do not enjoy all the features above mentioned. For example, the proposal by Demirtas (2006) is based on simulating binary data whose marginals are derived collapsing the prespecified marginals of ordinal variables. The correlation matrix of the binary variables is obtained by an iterative process in order to match the target correlation matrix for ordinal data, which requires the generation of a "huge" bivariate sample of binary data.
Other proposals by Madsen and Dalthorp (2007), Ferrari and Barbiero (2012), or Xiao (2017) for simulating multivariate ordinal/discrete variables with assigned margins and correlations exclusively address the dependence structure induced by the Gauss copula. Lee and Kaplan (2018) proposed two procedures based on the principles of maximum entropy and minimum cross-entropy to simulate multivariate ordinal variables with assigned values of marginal skewness and kurtosis; they rely on the multivariate normal distribution as a latent variable. Foldnes and Olsson (2016) proposed a simulation technique for nonnormal data with pre-specified skewness, kurtosis, Inducing a desired value of correlation between two point-scale… and covariance matrix, by using linear combinations of independent generator (IG) variables; its most important feature is that the resulting copula is not Gaussian. In Nelsen (1987), using convex linear combinations of the pmfs for the discrete Fréchet boundary distributions (i.e. those corresponding to comotonicity and countermonotonicity) and the pmf for independent rvs, the author constructs bivariate pmfs for dependent discrete rvs with arbitrary marginals and any correlation between the theoretical minimum and maximum. A similar rationale has been later used by Demirtas and Vardar-Acar (2017) for devising an algorithm for inducing any desired Pearson or Spearman correlation to independent bivariate data whose marginals can be of any distributional type and nature. The algorithm we proposed, though limited to point-scale rvs, is much more flexible as it allows a much broader choice of dependence structures, whereas the latter two procedures employ a convex combination of bivariate comonotonicity and countermonotonicity copulas. (An analogous way was followed by Lee (1997) for the first method.) Obviously, in the simplest case of a bivariate shifted Bernoulli ( m 1 = m 2 = 2 ), the proposed algorithm recovers (numerically) the same unique bivariate distribution yielded (analytically) by system (1), whatever copula is selected (provided it spans the entire dependence spectrum, i.e. it is comprehensive). For the case of dependent Bernoulli rvs, see also the example presented in McNeil et al. 2005, p.188.
We remark that even if we mentioned three well-known comprehensive copulas (Gauss, Frank, and Plackett), which are all exchangeable and radially symmetric (see, for example, McNeil et al. (2005), chapter 5), exchangeability and radial symmetry are not necessary conditions for the algorithm to work; thus, the proposed procedure is able to deal with asymmetrical dependence, which often occurs in many fields, especially in finance. The comprehensive property is instead required if one wants to span the entire range of feasible linear correlations between the two point-scale rvs. If one uses the Gumbel or the Clayton copula (both belonging to the broad class of Archimedean copulas) to induce correlation between the rvs, since these two copulas can only model positive dependence through their scalar parameter , it descends that only positive (or at most null) values of linear correlation can be induced and then assigned. A useful reference is Table 4.1 in Nelsen (2006), where some important one-parameter families of Archimedean copulas are listed along with some special and limiting cases; from there, it is thus possible to distinguish and select comprehensive copulas.
The algorithm presented in this section is naturally conceived for one-parameter copulas, but it can be extended to p-parameter copulas, p ≥ 2 (just think of Student's t copula); in this case, p − 1 higher-order correlations or co-moments need to be assigned along with the usual linear correlation in order to calibrate all the parameters.
Extension to multivariate context
The extension of the proposed procedure to dimension d > 2 -finding a joint distributions with d assigned margins and d(d − 1)∕2 distinct pairwise correlations-is not straightforward at all, but this is due to theoretical rather than computational reasons. To explain it, we will refer to a counterexample described in Bergsma and Rudas (2002) and Chaganty and Joe (2006). Let X, Y and Z be three correlated binary rvs, each with support {0, 1} , such that P(X = 1) = P(Y = 1) = P(Z = 1) = 0.5 (nothing changes if we select {1, 2} as common support). Based on the bivariate correlation bounds (see point 3 in Sect. 2) the three correlation coefficients XY , XZ , YZ can lie in the interval [−1, +1] . However, if we choose XY = YZ = 0.4 and XZ = −0.4 , then a trivariate distribution for (X, Y, Z) does not exist. So, a first type of problem is related to the feasibility of the (assigned) correlation matrix P for the discrete d-variate random vector: even if all the bivariate correlation bounds are respected and even if the matrix P reckoning all the pairwise correlations is a valid correlation matrix (a symmetric matrix with all ones on the main diagonal, which is positive semidefinite), nevertheless it may be impossible to construct a random vector with the d assigned margins and correlation matrix P. A second type of problem is related to the nearly lack of copulas able to calibrate-through their parameters-the values of the resulting pairwise correlation coefficients. The typical generalizations of the Frank and Plackett copulas, which we discussed in Sect. 3.2, are still characterized by a unique scalar parameter. Assigning arbitrary-though feasible-values to the pairwise correlations of the discrete random vector, generally would lead to no solution in terms of the dependence parameter. A way to overcome this issue is resorting to a copula whose number of parameters is at least equal to the number of distinct pairwise correlations: an obvious candidate is the Gauss copula; a richer option is represented by the t copula. Alternatively, one can resort to pair copula construction through vines, which are graphical models that represent high-dimensional distributions and can model a rich variety of dependencies. Another limitation of the algorithm of Sect. 3.3 is that it only handles rvs with finite supports: extensions to count variables need to seek for an accurate approximation of the correlation coefficient at step 5, possibly entailing a truncation of the support of the joint distribution of steps 3 and 4, in order to compute the correlation coefficient through a double finite summation.
Pseudo-random simulation
Simulating samples from a bivariate rv with assigned point-scale margins and correlation, built according the procedure described in Sects. 3.2 and 3.3, is straightforward. One can resort to the following general algorithm for meta-copula distributions: 1. Simulate a random sample (u 1 , u 2 ) from the copula C(u 1 , u 2 ; ) , where is the value of the copula parameter recovered through the algorithm of Section 3.3; 2. Set x 1 = F −1 1 (u 1 ) and x 2 = F −1 2 (u 2 ) , where F −1 1 and F −1 2 are the generalized inverse functions of F 1 and F 2 , respectively; 3. (x 1 , x 2 ) is a random sample from the target bivariate distribution, with copula C(u 1 , u 2 ; ) and margins F 1 and F 2 .
Inducing a desired value of correlation between two point-scale… Alternatively, since both X 1 and X 2 have a finite support space, one can resort to the "inversion algorithm" described in (Devroye (1986), pp.85,559) and used in Lee (1997): in this case, one directly considers the joint probability mass function p(i, j) of Eq.
(3) and proceeds as follows: 1. Set N = m 1 × m 2 ; let t , t = 1, … , N be the joint probabilities p ij arranged in descending order; 2. Let y t be the corresponding possible values of Y = (X 1 , X 2 ) , arranged similarly; 3. Define z 0 , z 1 , … , z N in the following way: 4. Simulate a random number u from a standard uniform rv; 5. Return y t , where z t−1 < u ≤ z t .
Application to CUB random variables
A CUB rv X is defined as the mixture, with weights and 1 − , of a shifted binomial with parameters m and and a discrete uniform distribution over the support Corduas (2011) proposed using the Plackett copula in order to construct a one-parameter bivariate distribution from CUB margins; this proposal was later investigated by Andreis and Ferrari (2013), also in a multivariate direction. Here, we reprise and extend these attempts of constructing a bivariate CUB rv, by resorting to the results discussed in Section 3.2. Let us suppose that we want to build a bivariate model with margins X 1 ∼ CUB(m 1 = 5, 1 = 0.4, 1 = 0.8) and X 2 ∼ CUB(m 2 = 5, 2 = 0.7, 2 = 0.3) ; we can find the values of the attainable correlations by using the function corrcheck in GenOrd (Barbiero and Ferrari 2015a), which returns the values min = −0.952003 and max = 0.8640543 . We proceed and select a desired feasible value of correlation between the two CUB variates, say = 0.6. Afterward, we recover the values of Ga (for the Gauss copula), (for the Frank copula), and (for the Plackett copula), according to the iterative procedure illustrated in the previous section. By setting = 10 −7 , we obtain Ga = 0.6898959 , = 5.453455 , and = 11.30106 . Table 3 reports the detailed iterations of the algorithm for the three copula-based models. Note that since > 0 , for the Frank copula we selected a value (1) larger than zero (tentatively, (1) = 1 ) and for the Inducing a desired value of correlation between two point-scale… Plackett copula a value (1) larger than one (tentatively, (1) = 2 ). For the Gauss copula, we set (1) = = 0.6. The three joint pmfs, sharing the same value of linear correlation, are reported in Table 4. It is easy to notice the differences among them. For example, the probability p 23 = P(X 1 = 2, X 2 = 3) takes the values 0.0922, 0.0948, and 0.1008, across the three joint distributions. Figure 2 displays the relationship between the copula parameter (on the x axis) and the corresponding Pearson's correlation of the example bivariate CUB model with Gauss, Frank, and Plackett copula, constructed by applying steps 3 ÷ 5 of the proposed algorithm over a dense grid of uniformly spaced values of . The almost linear trend for the Gauss copula can be easily noted (in this case, the copula parameter is itself a correlation coefficient!), whereas for the other two it is obviously (highly) nonlinear, due also to the unbounded domain for and . From the figure, one can state that X 1 ,X 2 is a monotone concave function of the copula parameter for the Plackett copula, and for the Frank copula only when > 0 : this explains the increasing nature of the sequences (t) in Tables 3b and 3c.
Inferential aspects
If we have a bivariate ordinal sample (x 1t , x 2t ) , t = 1, … , n , that we assume has been drawn from a joint cdf F(x 1 , x 2 ; , 1 , 2 ) = C(F 1 (x 1 ; 1 ), F 2 (x 2 ; 2 ); ) , then parameters' estimation can be carried out through different inferential techniques. First, let us define the log-likelihood function as ( 1 , 2 , ) = ( 1 , 2 , |(x 11 , x 21 ), … , (x 1n , x 2n )) = Inducing a desired value of correlation between two point-scale… with p being the joint pmf of Eq.(3) and n ij the absolute joint frequency of (x i , x j ) , i = 1, … , m 1 , j = 1, … , m 2 . We now present three possible ways to perform point estimation of the distribution's parameters: the first one is the customary maximum likelihood method; the second one is a modification thereof, whose use with copula models is however quite consolidated; the third one is directly suggested by the simulation procedure of Sect. 3.3 and can be regarded to as a by-product of the methodology presented in Sect. 3.
Full maximum likelihood
The most standard estimation technique consists of maximizing with respect to all the three parameters (or parameter vectors) simultaneously. This task can be usually done just numerically (i.e. no closed-form expressions are available for the parameter estimates), by resorting to customary optimization routines. The resulting maximum likelihood (ML) estimates can be thus derived as where is the parameter space.
Two-step maximum likelihood
This technique aims at reducing the computational burden of the previous one, by splitting the original maximization problem into two subsequent (sets of) maximizations in lower dimensions. In the first step, one estimates 1 and 2 separately, by resorting to maximum likelihood estimation, as if the two univariate components of the bivariate sample were independent, i.e. maximizing separately their marginal log-likelihood functions: with n i⋅ and n ⋅j being the observed marginal frequencies of X 1 and X 2 , respectively, thus finding the estimates ̂T S 1 and ̂T S 2 (the superscript "TS" standing for "two-step"). Then, one sets 1 =̂T S 1 and 2 =̂T S 2 in (4) and maximize it with respect to , finding ̂T S . This technique was introduced in a more general context and exhaustively described in Joe and Xu (1996), where it is also named "inference function for margins" (IFM). The authors compared the efficiency of the IFM with the ML by simulation and found that the ratio of the mean square errors of the IFM estimator to the full ML is close to 1. Theoretically, the ML estimator should be asymptotically the most efficient, since it attains the minimum asymptotic variance bound. However, for finite samples, Patton (2006) found that the IFM was often even more efficient than the ML. As a result, IFM is the main estimation method employed in estimating copula models.
Two-step maximum likelihood + method of moment
This method is directly suggested by the algorithmic procedure of Sect. 3.3. First, one estimates the marginal parameters 1 and 2 from sample data x 1i and x 2i , i = 1, … , n , independently maximizing 1 and 2 with respect to 1 and 2 , as for the previous technique, obtaining ̂T S 1 and ̂T S 2 . Then, one considers the maximum likelihood estimates of the marginal cdfs, F j (⋅) = F j (⋅;̂T S j ) , j = 1, 2 , and obtain the estimate of the dependence parameter via the method of moment, by inverting the relationship between the and Pearson's correlation: ̂T SM = g −1 (̂X 1 X 2 ;F 1 ,F 2 ) , where ̂X 1 ,X 2 is Pearson's sample correlation coefficient and TSM stands for "twostep-moment method". The evaluation of g −1 at ̂X 1 X 2 , given F 1 and F 2 , is carried out through the algorithm of Sect. 3.3.
We remark that these three estimation methods are just possible alternatives to be employed for the specific context of copula-based discrete distributions considered in this work. When dealing with copula-based distributions in the continuous case, a straightforward estimation method for the parameter of a specific bivariate parametric copula is the method of moment. It consists in considering a rank correlation between the two rvs (say, Kendall's or Spearman's ), looking for a theoretical relationship between it and the parameter of the copula, and substituting the empirical value of the rank correlation into this relationship to derive an estimate of the copula parameter. The main advantage of this method is that it does not require any assumption about the marginal distributions, since rank correlations are margin-free, i.e. they depend on the copula only and are not affected by the marginal distributions.
When dealing with factor copulas, another consolidated estimation method is represented by a sort of simulated method of moments (SMM), where the "moments" that are used in estimation are not the usual ones, but functions of rank statistics. The SMM estimator is derived as the minimizer of the distance between data dependence measures and dependence measures obtained through Monte Carlo simulation of the model (Oh and Patton 2017). In the context of hierarchical Archimedean copulas, Okhrin and Tetereva (2017) investigate a clustering estimator based on Kendall's by means of Monte Carlo simulations; it is shown to be competitive in terms of statistical properties (bias and variance) and to be computationally advantageous.
However, using these methods would not be convenient in our context, since we know rank correlations lose their nice properties, holding in the continuous set-up, due to the presence of tied values (see, for example, Nešlehová (2007)).
Monte Carlo study
The relative performance of the estimators derived through the three methods described in the previous section, expressed in terms of some statistical indicators such as bias or mean-squared error, can be assessed for finite sample size only via Monte Carlo (MC) simulation. Usually, the estimators of the marginal parameters 1 3 Inducing a desired value of correlation between two point-scale… methods have a very close statistical behaviour; on the contrary, differences are expected to arise among the estimators of the dependence parameter. Here we will examine the joint behaviour and performance of all the parameters' estimators.
For the multivariate case, we recall that the bias of an estimator ̂= (̂1, … ,̂p) � of a p-dimensional vector parameter = ( 1 , … , p ) � is defined as: is said to be an unbiased estimator of if (̂) = for any ∈ . A multivariate generalization of the mean-squared error (MSE) is provided by the MSE matrix (see, for example, Mittelhammer 2013, p.377): The MSE matrix can be decomposed into variance and bias components, analogous to the scalar case. Specifically, MSE is equal to the sum of the covariance matrix of ̂ and the outer product of the bias vector of ̂: The trace of the MSE matrix defines the expected squared distance (ESD) of the vector estimator ̂ from the vector estimand and is equal to: where " tr " denotes the trace of a matrix. Being a scalar, the ESD allows direct and easy comparison among different estimators of the same parameter vector: the lower the value of ESD, the better the estimator. Generally, for the estimator vectors corresponding to the three methods of Sect. 5, such quantities cannot be derived analytically, but an approximation can be obtained through the corresponding MC means computed over S simulation runs: with and ̂t being the value of the vector estimator for run t, t = 1, … , S . The larger the value S, the more accurate the approximation. bias(̂) ∶= (̂) − ; A MC study is designed to assess the relative statistical performance of the three types of vector estimators for a bivariate CUB model under an array of artificial scenarios, which are realized by varying the dependence structure, the CUB marginal parameters, and the sample size. We point out that this study will not allow general conclusions, but is intended merely to demonstrate how the different inferential methods work and check the potential of the proposal. As possible dependence structures, we evaluate the Gauss, Frank, and Plackett copulas: the first with parameter Ga equal to −0.6 or +0.6 ; the second with parameter = −5 or = +5 ; the last with parameter = 1∕4 or = 4 . As CUB marginal parameters, we consider m 1 = m 2 = 5 and m 1 = m 2 = 7 , combined with the following values for the marginal parameter vector M = ( 1 , 1 , 2 , 2 ) � : (0.4, 0.8, 0.4, 0.8) � , (0.4, 0.8, 0.7, 0.3) � , and (0.7, 0.3, 0.7, 0.3) � . For all the 2 × 2 × 3 = 12 combinations above, the sample sizes 50 and 100 are investigated, for a total of 24 artificial settings for each type of copula.
Note that the two-step procedures (Sects. 5.2 and 5.3) require that both empirical marginal distributions assume at least 4 distinct values in order the MLEs to be computed (Piccolo 2003). To ensure feasibility of these two techniques for any simulation run, when they occurred, we simply discarded the samples that do not respect such condition from the MC simulation and kept fixed at S = 1000 the number of feasible samples to draw and to use for the statistical analysis under each artificial setting.
Simulation results reporting the values of Ê SD MC are displayed in Tables 5a, b, and c. Other summary indices computed over the 1, 000 simulation runs, such as the MC mean of the bias vector, are available on request.
For the models with the Gauss copula, the full MLE performs the best in terms of ESD; for the other two estimators, the values of ESD are very close to each other for any setting, pointing out a slight preference towards the two-step MLE for n = 100 . There is also a relevant difference in the values of ESD across the settings; as expected, ESD decreases moving from n = 50 to n = 100 , holding the other parameters fixed. The setting with = (0.7, 0.3, 0.7, 0.3, −0.6) � and n = 100 minimizes the value of the index for all the three estimators; the setting with = (0.4, 0.8, 0.4, 0.8, −0.6) � maximizes it. For the models with the Frank copula, the two-step MLE surprisingly overtakes the full MLE for any setting; the two-step moment estimator shows the worst performance, even if for n = 100 this difference is attenuated. Significant improvement of all the three estimators occur when moving from n = 50 to n = 100 ; the values of distribution's parameters also affect the values of ESD: however, the effect of the model parameters cannot be easily extracted from the results on the examined scenarios.
For the models with the Plackett copula, we have the following interesting result: the MLE is the best performer (smallest Ê SD MC ) when = 1∕4 (corresponding to negative dependence); for the complementary scenarios, corresponding to = 4 and then positive correlation, the two-step MLE is the best performer. The twostep moment estimator has a far worse behaviour than its competitors, apart from one scenario, where it is the second best after full MLE; this is especially apparent for = 4 . Note that in this case, differently from the previous models, the values 1 3 Inducing a desired value of correlation between two point-scale… of Ê SD MC sensibly change moving from = 4 to = 1∕4 , holding fixed the other parameters; this is due to the different magnitude of the two values of here considered, whose choice was related to the different meaning of the copula parameter's values: for the Frank and Gauss copulas, changing the sign of means changing the sign to correlation while keeping its intensity fixed; this does not occur for the Plackett copula, whose parameter has to range within ℝ + (see Fig.2), leading to negative dependence if failing in (0, 1) and to positive dependence if larger than 1. Although in this MC study, the results in terms of statistical performance of the inferential technique proposed in Sect. 5.3 are overall worse than those of the two more consolidated techniques described in Sects. 5.1 and 5.2, nevertheless the former can be still useful for providing starting values for the copula parameter to the maximization routines of the two latter two.
We also measured, on a selection of artificial settings, the total computational times in minutes required by each of the three estimation methods over the S = 1000 MC runs, which are displayed in Table 6. Although the magnitude of the computation time depends on the selected dependence structure, it can be noted that in each setting the two-step method of moment is by far the least time-consuming, followed ML full maximum likelihood method, TS two-step maximum likelihood method; TSM= two-step maximum likelihood method+method of moment Table 6 Overall computation times in minutes for the three estimation methods for some artificial scenarios considered in the MC study Here, n = 100 ; m 1 = m 2 = 7 ; 1 = 0.4, 1 = 0.8, 2 = 0.7, 2 = 0.3 ; " + " and "−" indicates the copula parameter value inducing positive and negative dependence, respectively (see Tables 5a, 5b,
3
Inducing a desired value of correlation between two point-scale… by the two-step maximum likelihood method, and the full maximum likelihood method. We remark that the suggested estimation procedure is faster when moving from the Gaussian copula to the other two dependence structures.
Empirical analysis
In this section, we consider an application of the inferential techniques of Sect. 5 to real data, specifically, the survey data coming from the 2000 International Social Survey Programme (ISSP), which addressed the topic of attitudes to environmental protection and preferred government measures for environmental protection. The prefmod package (Hatzinger and Dittirch 2012) comprises the raw data structured as a dataset with 1, 595 complete observations (one for each respondent) on 11 variables, namely five socio-demographical variables (gender, location of residence, age, country, and education) and six items (with a 5-point rating scale, i.e Likert type).
Respondents from Austria and Great
Britain were asked about their perception of environmental dangers; the questions concerned air pollution caused by cars (variable CAR), air pollution caused by industry (IND), pesticides and chemicals used in farming (FARM), pollution of country's rivers, lakes, and streams (WATER), a rise in the world's temperature (TEMP), and modifying the genes of certain crops (GENE). The answers were given on a 5-point rating scale, with response categories: (1) extremely dangerous, (2) very dangerous, (3) somewhat dangerous, (4) not very dangerous, and (5) not dangerous at all for the environment. We focus on respondents from Austria only and on WATER and GENE items. The joint and marginal empirical distributions of the two items are reported in Table 7. By considering the ratings as numerical values, we can treat this distribution as a sample from a bivariate discrete rv; its sample correlation is equal to 0.2988. The nature of the data suggests using CUB as parametric family for the marginal distributions; the Gaussian, Frank and Plackett copulas are assumed as dependence structures for the joint distribution. Table 8 summarizes the estimation results obtained by applying the full maximum likelihood method and both the two-step methods. From Table 8, it is easy to note that within each dependence structure, the three estimation methods provide estimate values for the marginal parameters which are very close to each other; the estimates of the dependence parameter are slightly different. For example, under the Gaussian dependence structure, the values of the estimates of the dependence parameter Ga range between 0.343 and 0.361by the way, this confirms a positive and moderate dependence between the two observed variables, as suggested by the value of the sample correlation. Differences in terms of maximized log-likelihood emerge across the three dependence structures here examined. Among the three models, the bivariate CUB with Plackett copula (and parameters set equal to the corresponding MLEs) is that providing the best fit ( = −2006.317).
Alternative ways of comparing the goodness of fit of the three models can be taken up; for example, one can consider Aitchison distance between the empirical and theoretical probabilities = ∑ ∑ (log(p ij ∕p ij ) −L) 2 , where L = ∑ ∑ log(p ij ∕p ij )∕k , with k being the total number of points of the support of the bivariate rv. The minimum value of the Aitchison distance is achieved by the model with the Plackett copula ( = 3.830 ); the model with the Frank copula provides = 4.089 ; the Gauss copula returns = 4.390 . Other possible distances or divergences between discrete distributions are mentioned in Fossaluza et al. (2018).
An evaluation in absolute terms of the goodness-of-fit of this bivariate model can be carried out by resorting to the chi-squared statistic defined as 2 = ∑ ∑ (n obs ij − n theo ij ) 2 ∕n theo ij , with n obs ij and n theo ij indicating the observed and the theoretical frequencies, respectively. In order to accomplish the requirement that each n theo ij has to be not smaller than 5, we can proceed and collapse the last two ordered categories (4 and 5) for each variable, thus obtaining the new observed joint distribution in Table 9. There, we also reported the theoretical joint frequencies corresponding to the "best" bivariate CUB model, for which the Chi-squared statistic above takes the value 21.68, with a p-value 0.0168 (the test statistic, under the null hypothesis that the data come from the bivariate CUB model, asymptotically follows a chi-squared distribution with 16 − 1 − 5 = 10 degrees of freedom). This means that the goodness of fit of the model is hardly satisfactory; at the significance level 1%, we do not reject the null hypothesis. Looking at Table 9, discrepancies between
3
Inducing a desired value of correlation between two point-scale… observed and theoretical joint (and also marginal) frequencies are visible to the unaided eye. One can also consider not using a parametric model for the two margins and estimating them nonparametrically. In the opposite direction, other families of bivariate distributions may be tested for fit improvement; for example, cumulative link models could be used for the univariate margins (Agresti and Kateri 2019). Reasonably, introducing covariates (gender, education, age, location of residence) for the marginal (and dependence) parameters would likely increase the fit. However, we remark that the aim of this section was not so much to evaluate the fitting of bivariate (CUB) models to real data as to illustrate and compare different estimation techniques, one of which derived through a correlation-matching procedure. Besides, we remark that pooling cells, although being a viable alternative to obtain accurate p-values in some instances, should be preferably performed before the analysis is made in order to obtain a statistic with the appropriate asymptotic reference distribution; otherwise, it may distort the purpose of the analysis (Maydeu-Olivares and García-Forero 2010). Alternatively, one could resort to resampling methods (e.g. bootstrap), but unfortunately, existing evidence suggests that resampling methods do not yield accurate p-values for the 2 statistic (Tollenaar and Mooijart 2003).
Conclusions
In this work, we showed how to build a joint probability distribution with assigned discrete point-scale margins enjoying a target (feasible) value of correlation. We proposed a two-step copula-based approach: first, one selects a copula function and constructs a bivariate distribution preserving the assigned margins; then, one adjust the value of the copula parameter in order to achieve the target correlation: this leads to the elaboration of an iterative procedure whose accuracy can be set a priori, differently from other approaches in the literature that are based on some rearrangement of very huge samples drawn independently from the two margins.
The approach is designed to work with any one-parameter copula family, provided that it encompasses the entire range of dependence, thus allowing the use of other dependence structures than the Gaussian, whose use in the social sciences has been overriding. Although in this paper we considered three exchangeable and radially symmetric copulas, this feature is not necessary for the algorithm functioning. Moreover, being comprehensive is a property that makes the algorithm applicable to any feasible correlation value; however, if one is only concerned, for example, with positive correlations, then he/she can use non-comprehensive copulas as well, such as Gumbel or Clayton.
As said, the algorithm is specifically conceived for one-parameter copulas, but it can be extended to two or more-parameter copulas: in this case, some higher comoments need to be assigned along with linear correlation in order to calibrate the additional parameters.
The extension of the proposed procedure to dimension d > 2 , discussed in Sect. 3.4, is not straightforward, being limited by theoretical rather than computational reasons, related to the properties of Pearson's correlation matrix for a d-variate random vector.
Future research will explore these aspects.
Supplementary material
The R code implementing the algorithm of Section 3, along with the example of Section 4, is provided as supplementary material here: https:// tinyu rl. com/ ASTB-D-19-00215.
Funding Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 12,720.4 | 2021-05-29T00:00:00.000 | [
"Mathematics"
] |
Molecular and Systems Biology Approaches for Harnessing the Symbiotic Interaction in Mycorrhizal Symbiosis for Grain and Oil Crop Cultivation
Mycorrhizal symbiosis, the mutually beneficial association between plants and fungi, has gained significant attention in recent years due to its widespread significance in agricultural productivity. Specifically, arbuscular mycorrhizal fungi (AMF) provide a range of benefits to grain and oil crops, including improved nutrient uptake, growth, and resistance to (a)biotic stressors. Harnessing this symbiotic interaction using molecular and systems biology approaches presents promising opportunities for sustainable and economically-viable agricultural practices. Research in this area aims to identify and manipulate specific genes and pathways involved in the symbiotic interaction, leading to improved cereal and oilseed crop yields and nutrient acquisition. This review provides an overview of the research frontier on utilizing molecular and systems biology approaches for harnessing the symbiotic interaction in mycorrhizal symbiosis for grain and oil crop cultivation. Moreover, we address the mechanistic insights and molecular determinants underpinning this exchange. We conclude with an overview of current efforts to harness mycorrhizal diversity to improve cereal and oilseed health through systems biology.
Introduction
Humans depend on agricultural crops rich in starches, oils, and proteins to meet their food and fodder needs.Within the group of grasses known as monocots, several cereal crops have been domesticated to yield starch-rich grain seeds.Two types have been distinguished-cereals that contain gluten and are generally used for bread-making (wheat, oats, barley, rye) and cereals that do not contain gluten (rice, maize).Conversely, oilseed crops, primarily cultivated for the oil content present in their seeds, belong mostly to the dicot group.Oilseed crop seeds (sunflower, rapeseed, peanut, sesame, ground pea) are composed of 40-50% oil and 20-30% protein, while proteo-oil crop seeds (soybean, lupine) are composed of 15-30% oil and 30-40% protein.The demand for vegetable oil and cereal is increasingly rapidly worldwide due to a continually growing global population.Additionally, the increasing homogeneity in the compositions of national food supplies across the world implies that the production of food for mankind relies heavily on a small number of these crops [1][2][3].The plateauing of grain yields in major cereal and oilseed crop production regions during the 21st century is concerning and has been attributed to additional factors such as climate change, decreasing land fertility/biodiversity, soil contamination, and agricultural mismanagement [4,5].All these trends could have serious implications for the future of food security and agricultural sustainability in these regions, highlighting the need for innovative solutions to address these challenges.
In addition to bolstering food security, it is crucial to preserve and enhance soil biodiversity and functionality through sustainable management techniques.The utilization of arbuscular mycorrhizal fungi (AMF) presents a promising approach to achieving sustainable agriculture and promoting global food security.The partnerships between AMF and land plants are among the most widespread and ecologically important symbioses on Earth.These fungi are obligate symbiotic microbes that integrate their hyphae-associated microbial communities, which can extend beyond plant root systems to exploit organic soil nutrients, thereby enhancing nutrient uptake efficiency (NUE) through the mycorrhizal pathway.These nutrients, together with water [6], are delivered to hosts in exchange for photosynthetically derived carbohydrates and fats during their intraradical phase, when highly branched structures called arbuscules, the workhorse of the symbiosis, are formed in cortical cells of the host root (for reviews see [7][8][9][10]).For successful arbuscular mycorrhizal (AM) symbiosis to occur and facilitate nutrient exchange, a molecular signal recognition process must take place between both partners, leading to signaling transduction and transcriptional reprogramming within plant cells.The process of AM fungal hyphae invading host roots is highly complex and regulated, yet our understanding of the molecular mechanisms that control the development of mycorrhizal infection structures is limited due to the lack of genetic tools available for studying AMF.In contrast, there has been more progress in discerning the means by which cereal and oilseed crops orchestrate these processes.
The release of plant hormones (strigolactones; SLs) may be the initial step in the reciprocal recognition process that activates fungal metabolism and branching.The establishment of a stable and productive symbiosis depends on a complex interplay between molecular signaling and transcriptional reprogramming that occurs between the hyphae and the roots.The recognition of molecular signals is crucial for establishing and maintaining the symbiosis.These molecular signals include lipochitooligosaccharides (LCOs) and chitooligomers (COs) produced by AMF [11,12], which serve as a key signal for plant defense responses and aid in the recognition of the symbiont by the plant host.A critical aspect of the signaling process is the presence of chitin-binding Lysin Motifs (LysM) with receptor-like kinases (RLKs) in the plant that recognize fungal-produced LCOs [13,14].The binding of LCOs to LysM RLKs initiates downstream signaling and triggers various signaling pathways that orchestrate the transcriptional reprogramming of plant cells.The binding of LCOs to LysM RLKs also triggers the activation of calcium signaling, which is an essential component of AM symbiosis establishment, leading to the activation of the common symbiosis signalling pathway (CSSP) [15].The transcriptional reprogramming observed in mycorrhizal cereal and oilseed plants involves multiple genes and pathways and is critical for the overall success of the symbiosis (see below sections).These transcriptional changes are crucial for creating an environment that supports signal perception and the exchange of nutrients between the plant and fungal partner.
Harnessing mycorrhizal symbiosis holds great promise for improving the fitness and quality/functionality of cereal and oilseed crops, and molecular and systems biology approaches are key in unlocking its full potential.Given the persistence of AMF across the vast majority of modern land plants, including cereal and oilseed crops, AM associations likely continue to play an important, yet largely unrecognized, role in modulating the food system and global climate, and promoting sustainability through their ability to impact terrestrial biogeochemical cycling of nutrients and carbon (C), enhance nutrient uptake, crop yield and quality, plant resilience to environmental stressors, soil quality, and reduce agricultural inputs.Here, we emphasize the contribution of AMF to the benefits of cereal and oilseed crops across disciplines.First, we describe the common signalling processes used by cereal and oilseeds plants during mutualistic interactions with AMF by exploring the aid in this recognition process, communication, and counter-communication systems that have been established to determine the degree of ingress of the AMF into their hosts.We then explore the molecular mechanisms by which AMF contribute to unleaching nutrients in cereal and oilseed crops.Finally, we identify research avenues that would further develop our understanding of AMF dynamics via plant-microbe pathways.The signaling between cereal and oilseed plants and AMF is crucial for establishing and maintaining mycorrhizal symbiosis.Presymbiotic communication is necessary for the development of this association, depending on signal exchange between host plants and AMF.SLs trigger fungal metabolism and hyphal branching [16] (Figure 1).Before appressoria formation, the host plant root emits SLs into the rhizosphere to activate spore germination and hyphal growth of AMF, ensuring contact between AMF and host plant roots.SL production is promoted in P-deficient conditions from carotenoid in plastids, involving several proteins, namely carotenoid isomerase DWARF27 (D27), CAROTENOID CLEAVAGE DIOXYGENASE7 (CCD7), CCD8, and the cytochrome P 450 homolog MORE AXILLARY GROWTH1 (MAX1) [17].Nadal et al. [18] revealed that SLs may not be the only important signaling molecules required for AMF priming.The nitrogen oxide (NO) PERCEPTION1 (NOPE1) transporter was recently shown to be required for fungal priming during the precontact phase in maize and rice.NOPE1 designates a transport protein of the Major Facilitator Superfamily capable of transporting N-acetylglucosamine (GlcNAc), thus demonstrating for the first time such transport activity in a plant protein.The presence of NOPE1 mutants reveals no interaction with AMF, just as their root exudates do not trigger transcriptional responses in AMF, leading the authors to speculate that NOPE1 transports a plant-derived GlcNAc molecule that triggers signaling in AMF to promote symbiosis creation.Despite an incomplete understanding of the relationship between these early signals, it is clear that they play an important role in the establishment of symbiosis.
Molecular Basis of Cereal and
Simultaneously, Myc factors are secreted by AMF, including short-chain chitooligosaccharides (Cos) such as CO4 and CO5, and sulfated and nonsulfated LCOs, which play a key role in stimulating host presymbiotic responses [11,12].Together, COs and LCOs are chemically closely similar to CO7 and CO8, considered as microbe-associated molecular patterns (MAMPs) inducing plant pattern-triggered immunity (PTI) [19][20][21].The process of receptor-mediated perception involves specific receptors on the surface of plant root cells capable of perceiving these chemical signals.When the fungal signals are detected by the plant's receptors, it triggers a signaling cascade within the plant, leading to the activation of various molecular and genetic responses, including changes in gene expression, root architecture, and the production of specialized structures (e.g., arbuscules).
It has been demonstrated that Chitin Elicitor Receptor Kinase1 in rice (OsCERK1), a LysM-containing receptor-like protein kinase (RLK), is implicated in mycorrhizal establishment [22,23] and the recognition of COs, as shown by Carotenuto et al. [24], who observed reduced CO-dependent nuclear Ca 2+ oscillations when using mutated OsCERK rice.Importantly, given that OsCERK1 is unable to bind to CO4 and CO5 and that the interaction between chitin elicitor-binding protein (OsCEBiP) and OsCERK1 induces immune signaling [25,26], it is expected that there are other proteins, apart from CEBiP, that interact with OsCERK1 for Myc factor recognition [27].Recent research has revealed that Myc factor receptor 1 (OsMYR1) is an OsCERK1-binding molecule for Myc-CO4 recognition, as evidenced by the reduced symbiotic responses and limited AMF colonization in the Osmyr1 mutant [28].The molecular mechanism used by the OsMYR1/OsCERK1 complex to induce symbiosis instead of an immunological response was also identified in the same study [28].Rice sensitivity to MAMPs is found to be reduced by the symbiotic receptor OsMYR1, as it prevents the formation of the OsCERK1-OsCEBiP complex and blocks OsCERK1 from phosphorylating the OsGEF1 substrate downstream [29].It is believed that the recognition of Myc factors by RLKs, a pattern recognition receptor (PRR), contributes to the formation of mycorrhizal symbiosis through the activation of symbiotic responses such as significant nuclear Ca 2+ oscillations and the transcriptional regulation of mycorrhizaeassociated genes in the rhizodermis cells [10,30].Additionally, Gutjahr et al. [31] identified a receptor for karrikin (a plant growth regulator) in rice as a required signaling element for mycorrhizal establishment.In a previous study, Shrivastava et al. [32] identified a LysM domain-containing GPI-anchored protein (M4CCU0) in the rapeseed root colonized with Piriformospora indica.This LysM domain matches the chitin recognition strategy revealed in rice [33,34].The same authors identified another protein, LRR-RLK At1g51890 (M4DQZ2), which facilitates the perception of various fungal molecules including chitin, small peptides, and proteins [24].It has been demonstrated that Chitin Elicitor Receptor Kinase1 in rice (OsCERK1), a LysM-containing receptor-like protein kinase (RLK), is implicated in mycorrhizal establishment [22,23] and the recognition of COs, as shown by Carotenuto et al. [24], who observed reduced CO-dependent nuclear Ca 2+ oscillations when using mutated OsCERK rice.Importantly, given that OsCERK1 is unable to bind to CO4 and CO5 and that the interaction between chitin elicitor-binding protein (OsCEBiP) and OsCERK1 induces immune signaling [25,26], it is expected that there are other proteins, apart from CEBiP, that interact with OsCERK1 for Myc factor recognition [27].Recent research has revealed that Myc factor receptor 1 (OsMYR1) is an OsCERK1-binding molecule for Myc-CO4 recognition, as evidenced by the reduced symbiotic responses and limited AMF colonization in the Osmyr1 mutant [28].The molecular mechanism used by the OsMYR1/OsCERK1 complex to induce symbiosis instead of an immunological response was also identified in the same study [28].Rice sensitivity to MAMPs is found to be reduced by the symbiotic receptor OsMYR1, as it prevents the formation of the OsCERK1-OsCEBiP complex and blocks OsCERK1 from phosphorylating the OsGEF1 substrate downstream [29].It is believed that the recognition of Myc factors by RLKs, a pattern recognition receptor (PRR), Further research is required to fully understand the complex mechanisms used by cereal and oilseed plants to detect AMF signals.A comparative analysis of the transcriptomics, proteomics, and structural analyses of several cereal and oilseed model plants during mycorrhizal symbiosis could provide a better understanding of this aspect of symbiosis establishment.Moreover, understanding the receptor-mediated recognition of AMF signals in cereals and oilseeds is crucial for optimizing the establishment and functioning of mycorrhizal symbiosis in agricultural systems.This understanding can enhance nutrient uptake, improve plant growth and health, and potentially reduce the need for chemical fertilizers.
Advances in Signal Transduction Pathways of AMF in Cereals and Oilseeds
The signal transduction pathways involved in the symbiosis between AMF and cereal and oilseed crops are intricate and multifaceted.The formation of an appropriate transcriptional response in rice roots exposed to germinated spore exudates depends on the alpha/beta-fold hydrolase and putative receptor protein DWARF 14 LIKE (D14L) [31].However, the mutant transcriptional profiles are consistent with D14L-mediated signaling occurring during a very early stage of symbiosis [31].D14L is a homolog of the Arabidopsis KARRIKIN INSENSITIVE2 (KAI2) protein, which controls protein turnover following KARRIKIN signaling by working with the F-box protein MORE AXILLIARY GROWTH2 (MAX2) [35,36].Karrikins are butenolide molecules produced when plant tissues burn during a fire and are related to SLs; dormant seeds that detect karrikins are triggered to germinate in fire-chasing species that grow quickly to take advantage of a lack of competition after a fire [37].The phenotypes observed in kai2 mutants, unrelated to karrikin perception itself, and the wide conservation of KAI2 in plants, have led to the hypothesis that the KAI2 receptor recognizes and binds to an as-yet-unidentified endogenous ligand, probably a phytohormone structurally related to karrikins and SLs [36,38].In rice, it has been observed that D14L is required in AMF symbiosis, in association with an earlier report of a d3 mutant (MAX2 homolog) that is unable to maintain AMF symbiosis [31,39].Additionally, previous studies have revealed that the SL receptor D14L and its genetically related relatives, the KAI2/D14L and DLK2 receptors, play significant roles in mycorrhizal symbiosis [31,40].Gutjahr et al. [31] consider that D14L plays an essential role in AMF symbiosis in rice and is necessary for the initiation of AM symbiosis.The hypothetical molecule likely to bind to and be recognized by the D14L receptor is provisionally named KAI2 ligand (KL).More recently, symbiosis-specific downstream responses triggered by AMF molecules are likely to initiate events in the root cells of a host plant.
Following the recognition of MAMPs, the plant engages signaling modules, including mitogen-activated protein kinases (MAPKs) and calcium-dependent protein kinases (CDPKs) [41].Activation of these protein kinases triggers a signal cascade responsible for the activation of specific TFs, leading to the induction of multiple intracellular defense responses.The induction of the signal cascade is either hormone-dependent (MAPK) or independent.
The OsCERK1 kinase domain is associated with another protein, receptor-like cytoplasmic kinase 185 (RLCK185).OsRLCK185 constitutes a target protein of Xoo1488, an effector of the rice pathogen Xanthomonas oryzae [42].This protein belongs to subfamily VII of RLCK.OsCERK1 associates with OsRLCK185 and phosphorylates it after chitin treatment [42].In response to chitin and peptidoglycan (PGN), phosphorylation of Os-RLCK185 by OsCERK1 activates the MAPK cascade, including OsMPK3 and OsMPK6 [42].Additionally, RLCK176, another member of RLCK subfamily VII, functions downstream of OsCERK1 in PGN and chitin signaling pathways [43].In Arabidopsis, BIK1, which belongs to the same RLCK subfamily, also acts in concert with various RKs, such as the flagellin receptor FLAGELLIN SENSING 2 (FLS2) and the EF-Tu receptor (EFR), similar to OsCERK1 and OsRLCK185/176.The combination of RKs and RLCKs appears to be a common module for MAMP-triggered immunity (MTI) signaling in plants.Ca 2+ influx acts as a second messenger in plant immunity and significantly contributes to the regulation of various defense responses [44,45].The different Ca 2+ compartments are likely to contribute to distinct behaviors after pathogen and symbiont perception.Overall, two signaling complexes, OsRacGEF1-OsRac1 and OsRLCK-OsMAPKs, are involved downstream of OsCERK1 in rice chitin-triggered immunity.
Chitosan is widely distributed in nature, especially as a structural constituent of fungal cell walls [46].Fungal pathogens replace their cell wall components to avoid degradation by lytic enzymes when invading host plant cells, and deacetylation of cell wall chitin to chitosan (poly-GlcNAc) is a likely pathogen infection strategy.Chitosan is one of the MAMPs in plants, inducing the MTI that triggers systemic acquired resistance (SAR) [47].In soybean, chitosan induces Ca 2+ influx into the cytoplasm and ROS production within minutes [48].In Cocos nucifera calli, chitosan triggers MAPK-type proteins, inducing the expression of defense-related genes [47].In rice, the role of deacetylated chitosan oligomers (chitooligosaccharides) as MAPKs remains unclear.
Following the development of symbiosis between host plant root cortex cells and AMF, the differential expression of numerous genes involved in mycorrhizal development, nutrient transport, and symbiotic signaling is regulated.Reduced Arbuscular Mycorrhiza1 (RAM1) is directly or indirectly involved in regulating a large number of these genes [8].However, the extensive induction of TFs induced by symbiosis indicates the presence of a complex regulation that is not yet well understood [49,50].This transcriptional response underlines the cellular changes essential for the reception of the fungal endosymbiont, with genetic analyses progressively revealing the role of each gene.
The deposition of the periarbuscular membrane (PAM) represents one of the most significant modifications to the colonized cell and is accomplished through polarized exocytosis, involving the EXOCYST complex, a unique splice variant of SYP132 specific to the symbiosis process [51,52], the plant-specific Vapyrin protein [53][54][55], and two symbiosisspecific VAMP721 proteins [56].The protein composition of the PAM differs from that of the plasma membrane and includes unique phosphate, ammonium, and sugar transporters, whose transport activity is stimulated by the proton gradient resulting from a symbiosisinduced proton ATPase residing in the PAM [57][58][59] and ABC transporters [60,61], which may be involved in export.Strikingly, the transport of these transporters to the PAM occurs by default owing to coinciding gene expression and protein production with the deposition of the PAM around arbuscular branches [62].Therefore, stringent transcriptional regulation of transporter genes is essential not only to ensure their expression in the appropriate cell type but also to ensure their localization in the PAM.
The establishment of the arbuscule involves not only the development of the PAM and apoplast, but also, later on, the active dismantling and removal of the membrane, arbuscule, and interface during a senescent phase known as arbuscule degeneration.Furthermore, the degeneration phase runs parallel to the expression of secreted hydrolase genes, regulated by a TF MYB, as well as the GRAS factors NSP1 and DELLA [63].The presence of the latter two proteins is also shown to be necessary for arbuscular development, suggesting that changes in the composition of a TF complex are likely to regulate the transition between the developmental and degenerative phases of the accommodation program.
Nutrient Uptake
Numerous studies have documented the association of cereals and oilseeds with AMF for nutrient acquisition and water uptake [64,65].The symbiotic relationship between AMF and cereal and oilseed crops is centered around nutrient exchange.AMF supply cereals and oilseeds with mineral nutrients and water while receiving carbon sources in return [66].This mutual trade is crucial for the functioning of terrestrial ecosystems and contributes to increased productivity of these crops [67,68].AMF can maintain approximately 90% of the plant's P and 60% of its N [69].After the establishment of AM symbiosis, mycorrhizal plants utilize two pathways for nutrient uptake.They can directly absorb nutrients from the soil through root hairs and the root epidermis or indirectly acquire nutrients through the AM fungal hyphae at the interface between the plant and the fungus.According to Dai et al. [70], Glomus species increased N and P uptake in organic field wheat by nearly 2.3 times compared to the typical method in the Canadian prairie.Under alkaline conditions, maize plants inoculated with R. irregularis developed longer roots and higher P absorption [71].Inoculation with R. irregularis enhances P and Fe concentration in sorghum grains and harvest indices in the mature stage [72].The relationship between oilseeds and AMF has been shown to enhance the uptake of P and other nutrients [73].Bellido et al. [74] demonstrated that sunflowers inoculated with R. irregularis exhibited the highest N, P, K, and Mg content compared to non-AMF plants.According to Yadav et al. [75], F. mosseae and Acaulospora laevis increased P, K, Ca, Fe, and Mg in sesame seeds compared to the control.In another study by Dabré et al. [76], R. irregularis boosted N and P level in Glycine max plants.
Transporters Involved in Nutrient Exchange in the Symbiosis of AMF and Cereal and Oilseed Crops
The primary benefit of establishing mycorrhizal symbiosis for plants is improved nutrient uptake [77].The PAM, which surrounds the arbuscules, contains a range of specific proteins responsible for nutrient absorption (Figure 2), with P i transporters being the most widely investigated [78].Plants have two types of Pi uptake systems: high-affinity and low-affinity P i uptake systems.Phosphate Transporter1 (PHT1) is a H + /P i symporter with high P i affinity, playing a key role in P i absorption by the plant roots [79].On the fungal level, a number of phosphate transporters appear to be responsible for the initial stage of symbiotic P i transport.They have been characterized based on transcriptomic and genomic data: GmosPT from F. mosseae, GiPT from R. intraradices, and GigmPT from Gigaspora margarita [80][81][82].These phosphate transporters are all expressed in the extraradical mycelium, where they are likely involved in the uptake of P from the soil [83].GmosPT and GigmPT were also expressed in intraradical hyphae, where they are believed to be active in P reabsorption from the periarbuscular space (PAS) [80,84].
laris exhibited the highest N, P, K, and Mg content compared to non-AMF plants.According to Yadav et al. [75], F. mosseae and Acaulospora laevis increased P, K, Ca, Fe, and Mg in sesame seeds compared to the control.In another study by Dabré et al. [76], R. irregularis boosted N and P level in Glycine max plants.
Transporters Involved in Nutrient Exchange in the Symbiosis of AMF and Cereal and Oilseed Crops
The primary benefit of establishing mycorrhizal symbiosis for plants is improved nutrient uptake [77].The PAM, which surrounds the arbuscules, contains a range of specific proteins responsible for nutrient absorption (Figure 2), with Pi transporters being the most widely investigated [78].Plants have two types of Pi uptake systems: high-affinity and low-affinity Pi uptake systems.Phosphate Transporter1 (PHT1) is a H + /Pi symporter with high Pi affinity, playing a key role in Pi absorption by the plant roots [79].On the fungal level, a number of phosphate transporters appear to be responsible for the initial stage of symbiotic Pi transport.They have been characterized based on transcriptomic and genomic data: GmosPT from F. mosseae, GiPT from R. intraradices, and GigmPT from Gigaspora margarita [80][81][82].These phosphate transporters are all expressed in the extraradical mycelium, where they are likely involved in the uptake of P from the soil [83].GmosPT and GigmPT were also expressed in intraradical hyphae, where they are believed to be active in P reabsorption from the periarbuscular space (PAS) [80,84].After phosphate, the importance of N absorption in AM symbiosis has also been revealed more recently, with a significant role played both in plant nutrition and in regulating the functioning of the symbiosis itself [83].Two protein families: (i) ammonium After phosphate, the importance of N absorption in AM symbiosis has also been revealed more recently, with a significant role played both in plant nutrition and in regulating the functioning of the symbiosis itself [83].Two protein families: (i) ammonium transporters (AMTs) and (ii) Nitrate Transporter 1/Peptide Transporter Family (NPF), were found to be transcriptionally induced in various plant species when inoculated with AMF [85,86].Several mycorrhiza-inducible AMTs have been identified in different species, such as GmAMT4.1 in arbusculated cortical root cells of G. max [85], and SbAMT3;1 as a potential transporter involved in ammonium uptake from the PAS in S. bicolor [87].Down-regulation of the SbAMT3;1 protein led to a reduction in nutrient flux from the AM fungus to the host and interrupted plant growth promotion after fungal inoculation [87].Ammonium transfer through the PAM was not the only pathway for symbiotic N uptake.Recent studies have revealed the existence of a conserved mycorrhizal pathway for nitrate uptake, at least in certain species, such as OsNPF4.5 in O. Sativa, ZmNPF4.5 in Z. mays, and SbNPF4.5 in S. bicolor, which have been shown to transport nitrate and are transcriptionally up-regulated during AM colonization [88].AMF are also capable of acquiring organic N from the soil; an amino acid permease, GmosAAP1 and a dipeptide transporter RiPTR2, have been identified from F. mosseae and R. irregularis, respectively [89,90].
The improvement of plant nutrition through AM interactions is not restricted to the provision of P and N. Other elements, such as Fe and Zn, play an essential role in plant nutrition as vital micronutrients.Table 1 summarizes a list of transporters involved in nutrient exchange during AMF-cereal and AMF-oilseed crop symbiosis.Some putative fungal transporters have been characterized (Table 1); for example, GintZnT1 from the extraradical mycelium of R. irregularis, with a predicted function in fungal Zn homeostasis [91].For the same element, ZIP13, a member of the ZRT family, and IRT-like Protein from barley (Hordeum vulgare), which encodes a potential Zn transporter, were revealed to be up-regulated during AM symbiosis [92].
Early reports demonstrated that sugars may be transported from the host plant to the AM fungus.However, the exact underlying mechanism of carbohydrate influx into the apoplastic compartment during AM symbiosis is strikingly limited.Recently, a novel class of sugar (sucrose and monosaccharide) transporters has been identified, which presumably mediate sugar efflux from the plant in symbiotic interactions [78].These transporters are proteins of the Sugars Will Eventually be Exported Transporter (SWEET) family, which have been suggested as promising candidates for symbiotic sugar exchange [78].In the same species, two other fungal sugar transporters (RiMST5 and RiMST6) were also found to be presented in the extraradical mycelium and were involved in the direct uptake of monosaccharides from the soil [93].
For a long time, it was believed that AMF utilized host-derived carbohydrates to generate lipids, the main form of C storage and movement in the mycobiont [94].Surprisingly, genome analyses of R. irregularis and the transcriptome of Gigaspora rosea have revealed the absence of the cytoplasmic fatty acid (FA) synthase type I (FAS-I) complex necessary for FA synthesis [95,96].Nevertheless, FA elongation and desaturation, as well as complex lipid production, occur in AMF [95].It has been proposed that the C16:0 compounds sn2-monoacylglycerol (sn2-MAG), which are structurally analogous to cutin precursors, are translocated from plants to fungi before conversion to other lipids [97].Half-size ABCG transporters appear to be promising candidates for lipid export to the symbiotic interface.In particular, STR (Stunted Arbuscule) 1 and STR2, which belong to the ABCG subfamily and are unique to mycorrhizal plants [98], have been shown to be crucial for arbuscule formation in O. sativa.They have been shown to function as heterodimers and to localize specifically to the PAM, and their dysfunction contributes to a stunted arbuscule phenotype [60].Numerous studies have reported significant transcriptional changes elicited in the plant host at all stages of colonization.Signaling, protein metabolism, nutrition transport, secondary metabolite biosynthesis, cell wall modification, and lipid metabolism constitute the majority of the regulated genes (Figure 2).Additionally, a substantial number of genes encoding putative transcriptional regulators are differentially expressed in AMF-colonized roots, suggesting that the development of the mycorrhiza is under the control of complex transcriptional network, where the GRAS gene family plays a significant role [104,105].
The CSSP is one of the main pathways of the transcriptional control of symbiotic genes, which is induced upon recognition of AMF signals and engaged during mycorrhiza establishment.CCaMK/DMI3, a calcium/calmodulin-dependent protein kinase, decodes nuclear calcium oscillations produced in plant root cells in response to external symbiotic signals, including Myc-LCOs [106].Together with NSP2 and the CYCLOPS-CCaMK-DELLA complex, CYCLOPS binds the RAM1 promoter and induces RAM1 expression [107], among other potential direct target promoters.RAM1 encodes a key TF required for arbuscule development.The GRAS protein NSP1 is required for a portion of the Myc-LCOs and the Myc-COs response [108,109], most likely by forming a regulatory module with NSP2 and the CYCLOPS-CCaMK-DELLA complex [110].
Recent studies have unveiled the pivotal role of phosphate starvation signaling in the transcriptional control of symbiotic genes, in addition to the CSSP initiated upon perception of arbuscular mycorrhizal fungi (AMF) signals and mediated by CYCLOPS/IPD3.The PHR (Phosphate Starvation Response) TFs regulate AM-related genes, as demonstrated by a hybrid experiment conducted by Shi et al. [111].Furthermore, computational analysis by these authors revealed that 42% of the promoter regions of AM-regulated genes in rice contain P1BS (PHR1 Binding Site) motifs, providing strong evidence of the crucial role of Pi starvation in the transcriptional activation of a broad array of AM-symbiotic genes.Recent studies in rice have significantly enhanced our understanding of the molecular mechanisms underlying AM transcriptional control by Pi starvation [111,112].Specifically, the PHR TFs from the MYB family bind to P1BS motifs in the promoters of "Pi starvation response-induced genes" under Pi-limiting conditions, thereby triggering the Pi starvation response.Conversely, under high Pi conditions, SPX proteins prevent PHR from binding to P1BS, thereby inhibiting the induction of phosphate starvation-induced genes and mycorrhizal infection [112].According to Das et al. [112], PHR2, a key transcriptional regulator of phosphate starvation responses in rice, governs AM symbiosis establishment.PHR2 is essential for root colonization, mycorrhizal phosphate uptake, and yield growth in field soil.Root colonization of phr2 mutants is significantly diminished.Guo et al. [113] identified Arbuscule Development Kinase 1 (OsADK1), a novel rice kinase gene crucial for R. irregularis arbuscule development.A mutation in OsADK1 could significantly impact the AM symbiotic program, affecting numerous vital TFs such as RAM1 and WRI5.Gu et al. [114] discovered that in a comparative transcriptomic analysis of maize seedlings grown under Cd stress with or without AMF inoculation, hundreds of genes involved in glutathione metabolism, the MAPK signaling pathway, and plant hormone signal transduction were enriched.
Apart from TF-mediated transcriptional regulation, post-transcriptional mechanisms for AM gene expression regulation have also been identified.MicroRNAs (miRNAs) are non-translated RNA molecules with a length of 21-24 nucleotides that regulate their target genes by preventing their transcription or translation.For instance, miRNAs from the miR171 family, particularly the microRNA miR171h, appear to play a role in maintaining the balance of AM colonization [115,116].The underlying mechanism may involve the capability of miR171h to cleave NSP2 transcripts, which encode the NSP2 TF involved in SL biosynthesis and are necessary for proper mycorrhizal colonization [11,117].Conversely, LOM1 transcripts, which also encode a GRAS TF required for root colonization, are positively regulated by another member of the miR171 family, miR171b.In this manner, miR171b stimulates AM symbiosis, likely by safeguarding LOM1 transcripts from negative regulation by other miR171 members [116].
Through high-throughput sequencing of small RNAs (sRNAs) in maize roots colonized by AMF, Xu et al. [118] identified 155 known and 28 new miRNAs.Twelve of the fourteen significantly down-regulated miRNAs belonged to the miR399 family, while two miRNAs were markedly up-regulated in response to the R. intraradices inoculation, indicating potential functions for these miRNAs in AM symbiosis.Pathway and network studies suggest that the differentially expressed miRNAs may control phosphate starvation response and lipid metabolism in maize during the symbiosis process through their target genes.
Genomics-based approaches in mycorrhizal symbiosis in cereals/oilseeds: a brief insight
Various functional genomics techniques have been employed to identify and investigate gene expression changes and the regulatory networks involved in AMF symbiosis in cereals/oilseeds.Microarrays, a hybridization-based approach, have enabled the simultaneous analysis of the expression levels of thousands of genes in response to mycorrhizal colonization, facilitating relevant comparisons in gene expression profiles between colonized and non-colonized roots [119][120][121].This approach has provided a comprehensive overview of the novel identified genes and the activated/repressed pathways during symbiosis [122,123].Through such transcriptional analyses, several differentially expressed genes (DEGs) have been identified in mycorrhized roots of rice [124], wheat [125], soybean [126], and sunflower [127], offering valuable information on mycorrhizal regulatedtranscripts in the distinct developmental stages of AMF symbiosis.RNA sequencing (RNA-seq) has revolutionized gene expression analysis and emerged as a relevant tool due to its clear advantages over other existing transcriptomic approaches [128].One of the most notable features of this next-generation sequencing (NGS) technology is its ability to rapidly and comprehensively detect whole transcripts in (non)-mycorrhizal tissues with unprecedented depth and accuracy in measurements [127,[129][130][131].
To leverage the wealth of genomic data generated through these 'transcriptomics' approaches, several computational-based methods have been recently developed to support deeper investigations into plant-AMF interactions.The most effective numerous computational approaches are those that provide an optimal framework for 'in silico' gene expression studies with minimal errors, in a fast and accurate manner, and with extensive data storage [132][133][134].For instance, the use of transcriptome assembly tools has facilitated the identification of numerous genes and non-coding RNAs that are differentially regulated during mycorrhizal colonization [135].Functional annotation of these transcripts offers insights into the potential roles they play in the symbiosis [136].De novo transcriptome assembly also enables comparisons among plant species/genotypes exhibiting different degrees of AMF dependencies.Such comparative analysis, when combined with other phylogenomic approaches, may identify the conserved core set of genes among plant species/genotypes that the intricate symbiosis process could require [137,138].Notably, functional genomics introduces new variations by knocking down or overexpressing a gene of interest and comparing the downstream phenotypic effects of the mutant strain to the WT [139].These methods are employed to uncover the roles of potential genes, on both the plant and fungal sides, in the establishment and functioning of symbiosis.The CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats) technology has been extensively applied in AMF-plant interactions.It involves inducing targeted and heritable mutations to putative genes to better assess their involvement in the symbiotic process.
Thanks to these gene-modifying methods, several genes with previously unknown functions in AMF symbiosis have been uncovered.These include genes involved in SL pathways in sorghum [140] and those included in lipid biosynthesis pathways [141].In addition to gene expression studies, phenotypic changes associated with mycorrhizal symbiosis are also considered as a relevant tool to evaluate the success of AMF symbiosis.Diverse structural and morphological changes occurring during the symbiosis process are related to different aspects, i.e., plant biomass, root morphology and architecture, leaf characteristics, fruit traits, and photosynthetic efficiency.As a nondestructive approach, plant phenotyping is instrumental in (i) simultaneously tracking these numerous traits over time, (ii) screening several species and genotypes with high performance in response to different AMF species, and (iii) identifying the best plant-AMF combinations in terms of mycorrhizal responsiveness for subsequent assessment at the molecular level.High throughput plant phenotyping, as a cutting-edge technology, now provides accurate information with high biological significance that researchers can rely on before engaging other complementary and costly genomic profiling studies.The significant knowledge advances made in cereal/oilseed-AMF interactions have been owed to each of these complementary approaches, regardless of their differing weaknesses and limitations, which have allowed the benefits provided by AMF to be investigated more in depth.
• AMF symbiosis establishment
The application of functional genomic tools has facilitated the identification of relevant plant and fungal genes that are crucial for the initiation and functioning of AM symbiosis.Despite the tremendous diversity of AMF and host plants, certain sequence steps leading to AM symbiosis remain highly conserved, irrespective of the combinations of fungal and plant species.While this intricate interaction has been extensively studied elsewhere [142,143], we emphasize here the key events that involve specific genes in cereal and oilseed crops.The major focus on the genes involved in the key steps in AMF symbiosis has largely been directed towards model plants for which the genome is fully sequenced (i.e., Arabidopsis) and/or those for which comparative studies on both mycorrhiza and rhizobia symbiosis are more feasible (e.g., M. trunculata and L. japonica).Few studies have been reported on the plant species that are the focus of this review.In the following section, we attempt to provide an overview of some distinct genes in AMF symbiosis in light of the knowledge obtained from other, more extensively investigated plant species.
Pre-infection stage: many 'molecules' recognize each other
The well-defined stages of the symbiosis are initiated following the recognition of plant-derived SLs that promote AMF growth and the production of fungal signals, Cos, and LCO [16,144,145].The fungal perception machinery for plant SLs, which has not been fully identified yet, is of major importance for understanding SL-related mechanisms that prepare the fungus for symbiosis.To date, approximately 30 SLs have been isolated from the root exudates of different plant species [146].Studies focused on SL biosynthesis have identified orthologs of CCD7 and CCD8 genes in many plant species such as rice, maize, and sorghum [140,147,148].CCD7 and CCD8 are key proteins involved in the early steps of SL biosynthesis [149].
The role of SLs as the rhizosphere signals in attracting AM fungi has been demonstrated in several plants through studies on mutants with SL deficiency or insensitivity and upon SL analog exposure.Mutation in rice D14, encoding α/β-fold hydrolase, a superfamily protein in signaling or the bioactivation of SLs downstream of their synthesis, led to SL insensitivity and high SL synthesis which increased the branching phenotype compared to the wild type (WT)) plants [150].Furthermore, mutation in a closely related homologue of D14 termed D14L abolished hyphal physical contact attempts and led to the absence of transcriptional responses to fungal signals.These findings highlight the important role of SL involvement in the control of early steps of AM interactions.
Cutin is also considered as a root exudate promoting AMF symbiosis [151].Being a product of esterification of cutin monomers into polyester compounds, this substance forms an outer hydrophobic layer acting as fencing at the aerial plant parts to prevent moisture loss [152].On the root surface, cutin acts as a signaling molecule for the successful establishment of symbiosis.Exogenous application of exogenous lipid monomers dod not hamper colonization attempts in the M. truncatula ram2 mutant, which is deficient in cutin monomer production [153].The specific function of cutin exudation at the early stage of symbiosis is not fully elucidated yet, though it may serve as a substrate for AMF cutinase providing nutrients to support hyphal growth.
Plant-derived SLs cutin and GlcNac are perceived by the AMF, which in return secretes chitin-derived signaling molecules known as Myc-factors [122].They include Myc-LCO and Myc-CO [154], which are recognized by a set of conserved plant receptors [12].Some plants like rice have been shown to preferentially recognize Myc-CO rather than Myc-LCO.Being non-selective, the leguminous M. trunculata exhibited, in the same comparative study, responsiveness toward both of them, thus indicating differential abilities to perceive and to respond downstream to these chitinaceous signaling molecules [155].The biological significance behind the AMF producing various chitinaceous compounds might be to promote the diversity of such signaling molecules for the robustness of the system.On the other hand, the other reason might be for enabling plants to distinguish "friends" among other "fungal foes" present in the rhizosphere [30,156], as is dually performed by a co-receptor CERK1, since it mediates efficiently both immune and symbiotic responses in rice [157].The shared and differential components as well as the related genes between the symbiotic and immune pathways, not within the scope of the present review, have been elegantly reported and updated elsewhere [158,159].
Besides enhancing the Myc-CO release in AMF, SLs could also induce an intense stimulation of other fungal genes.One of these encodes mitochondrial metabolism induction, resulting in its active division with increased NADH and ATP production, prior to the onset of branching [160,161].This is the proposed way in which lipid catabolism, allowing spores to germinate, is activated through host-fungi signaling.SIS1 has been pointed out as a novel SLs activated gene when stunted arbuscules and reduced root length have been developed in M. trunculata roots upon knocking down its expression [162].No further detailed data are so far available about the SL receptor in fungi and the involvement of the SIS1 gene in other plants apart from the one for which the study was carried out [162].Nevertheless, no doubts remain regarding the prominent involvement of SLs in the establishment of AMF symbiosis.As fungal hyphae grow and approach the root, in response to the attractive plant-derived molecules, the reciprocal AMF-released signals engage the so-called CSSP.In this route, the emitted microbial signals are translated into calcium oscillations in root epidermal cells, which are considered as a hallmark of symbiotic signaling, leading to the activation of symbiosis-related genes [159].These genes encode proteins that are directly or indirectly involved in a signal transduction network that is required for the development of intracellular accommodation structures for symbiotic fungi [15].SYMRK encodes a Lucine Rich Repeat (LRR) protein kinase involved in symbiotic signal perception.The genes CASTOR, POLLUX (both reported in rice and soybean), NUP85, and NUP133 are required for the induction of calcium spiking [163,164], and CCAMK acts as a decoder of calcium signaling while CYCLOPS (reported in rice) is downstream of calcium spiking [15,163,164].The encoding-TF CYCLOPS/IPD3 (reported in rice) acts as transducer of the Ca 2+ signals [163].It has long been believed that the intricate process of symbiosis is solely assumed by the CSSP and their related genes.Interestingly, a CSSP-independent pathway has been reported in rice, where D14L, a specific intracellular receptor, plays a central role in establishing symbiosis, as it has been abolished in its early stage in d14L mutants [157].No further updates concerning either the identity of D14L ligand or the involved genes in the novel D14L signaling pathway have been provided so far in rice and in other model plants.
• Physical Contact, nutrient exchange, and associated events in AMF Symbiosis
Following reciprocal recognition in the rhizosphere, physical contact occurs by forming a hyphopodium, which represents the first AM infection event mediated by the RAM1 gene (reported in rice, named OsRAM1 or OsGRAS2) [165,166].RAM1 has been particularly studied in M. truncatula, where ram1 plants mutants are deficient in AM infection due to defect in the differentiation of tip-growing hyphae into a hyphopodium following root contact [167].GlcNAc, whose transport is mediated by OsNOPE1 in rice, as well as SLs and cutin, are required for hyphopodium formation as sources of C. The fungus pursues invasion of cortical cells through a pre-penetration apparatus (PPA) and develops arbuscules in inner cortical cells.Phenotypic analysis of M. truncatula symbiotic mutants shows that DOES NOT MAKE INFECTION2 (DMI2) and DMI3, both genes belonging to CSSP, are essential for PPA induction, and that DMI3 is required for the subset of the induced genes during PPA formation [168,169].Chen et al. [170] have also reported the importance of these genes in rice, without specifying the step of their involvement in the AM symbiosis establishment process.Accordingly, an inappropriate PPA formation with a limited rhizodermal penetration has been obtained in rice when CSSP components genes are mutated [171].The formation of arbuscules from the PPA involves intensive restructuring steps with several associated genes [10], which remain unidentified in cereal and oilseed crops.OsADK1 fits into the set of arbuscule development genes as a newly identified rice kinase, which is specifically induced in the arbusculated cells and required during arbuscule development in AM symbiosis [172].During arbuscule elongation, there is an increased source-to-sink flux redirecting sucrose from leaves towards roots.These hexoses are then transported to the interface between the plant and the fungal membrane at the PAS by sugar transporters such as SWEET1b after being transferred by Monosaccharide Transporter2 (MST2) into the arbuscule cells [173].A tight spatio-temporal control reflecting a finely-tuned activation of these sugar transporters genes in plant cells around hyphae has been reported [174,175].Comparative expression analyses of roots infected or not by AMF showed, besides promoting high sugar content and plant growth, a clear induction of plant sugar transporter OsSWEET3b and GmSWEET6;15 genes in rice and soybean, respectively, in mycorrhizal roots [102,131].Transcriptional characterization of G. max indicated that two SWEET genes (GmSWEET6 and GmSWEET15) were exclusively up-regulated and highly expressed during AM symbiosis [102].Alongside the activation of plant sugar transporters, a number of fungal actors are involved in symbiotic sugar uptake, such as RiMST2 from R. irregularis, which is expressed in fungal intraradical structures [103].SWEET activity may lead to the release of sugars into PAS and thereby fine-tune sugar fluxes and availability at a level that meets the plant demands and allows the C supply to AMF [66].Host plants also provide lipids to fungal species, not just sugars as long believed [7,176].Interestingly, sugar can be transformed in infected cells into fatty acids by FAS encoding a fatty acylsynthase, FatM encoding an acyl-carrier protein-thioesterase, and the earlier reported RAM2, as required for arbuscular development in the fungal colonization process [138,177].While the molecular mechanisms and the encoding genes required to achieve normal arbuscule development, through the carbohydrate and lipid delivery process from plant to fungi, have not been fully unraveled in oilseeds and cereals, numerous studies have resolved this point in other plants.Importantly, impaired arbuscular growth and reduced AM fungal colonization have been reported in L. japonica and M. trunculata mutants for the FatM and RAM2 genes [8,138].The same result was later obtained in rice RAM2 mutants, supporting a conserved nutritional role of RAM2 between monocot and dicot lineages [141,156].Two half-size ABC transporters STR1/STR2, originally identified in rice, have been further added to the AM specific operational unit for plant biosynthesis and transfer to the arbuscules (FatM, RAM2) [178,179].Collectively, these findings indicate that lipids, together with sugars as a major C source in plants, support fungal growth, enabling them in a controlled manner to reach and colonize the host tissue.
• N and P acquisition in AMF symbiosis: an insight into cereal/oilseed transporter genes in AMF symbiosis
When the symbiosis is established, Pi is efficiently absorbed at the extraradical mycelium, where it is assembled to form polyphosphate (polyP) chains which then travel along hyphae to be hydrolyzed back into Pi in arbuscules and translocated to the cortical cells [180].Pi is then transported from the rhizosphere to other plant organs by P transporters belonging to the PHT protein family, which consists of four subfamilies (PHT1-4) [181,182].Several studies have aimed to identify the encoding PHT1 subfamily Pi transporters genes in rice [99], maize [183], barley [100], millet [184], sorghum [185], and soybean [186].While some plant PHT1 transporter genes show decreased transcription after the establishment of colonization, others maintain significantly enhanced gene expression in mycorrhizal roots.Indeed, rice roots exhibit two sets of Pi transporter genes, some of which are expressed (e.g., OsPT11) while others are not (e.g., OsPT1, OsPT2, OsPT3, OsPT6, OsPT9, and OsPT10) upon AM colonization [99].The "coordinated balance" between the two sets of Pi transporter genes, as previously described in maize [79] and in many other symbiotic plants, suggests a switch between the "mycorrhizal pathway" and the "direct pathway", with a dominate fungal Pi uptake that mycorrhizal plants rely on to fulfill their Pi acquisition requirements.
Among the 13 Pi transporter genes previously identified in maize (ZmPHT1;1-13), the ZmPt9 gene is distinguishably expressed in non-colonized roots and up-regulated in both colonized and non-colonized roots under low Pi conditions [79], underlying its role in Pi uptake [79].The expression of the ZmPt9 gene in maize in both colonized and uncolonized mycorrhizal roots suggests its dual involvement in either the direct or indirect mycorrhizal pathways.Moreover, Walder et al. [185] reported in sorghum a set of Pht1 Pi transporter genes, from which SbPT2, SbPT4, SbPT6, and SbPT7 were constitutively expressed in roots, whereas SbPT10 and SbPT11 were only detected in roots colonized by AMF, but not in other tested tissues, suggesting their differential involvement in Pi homeostasis and in the solely symbiotic Pi-uptake at the mycorrhizal interface, respectively [187,188].Additionally, SbPT1 is reported to be up-regulated in response to low soil Pi-availability in non-mycorrhizal roots and down-regulated in response to mycorrhization, displaying typical features related to the direct Pi-pathway of plants [185].By doing so, mycorrhizal plants could satisfy up to 90% of their overall Pi requirements [185] when the mycorrhizal pathway is utilized, and this pathway might be reversed once the Pi nutrient status in soil is re-established [85,189].
AMF can also take up inorganic (nitrate and ammonium) and organic (amino acids and small peptides) N-sources from the soil via the extra-radical mycelia, which are rapidly converted into arginine [190][191][192].The arginine is then translocated in this form from the extraradical mycelia to the intraradical mycelia, together with poly-P, towards the host roots.The released N into the roots is in the form of free C (NH 4 /NH 3 ) [191,192].Many AMT genes are specifically induced by AMF, including GmAMT4.1 in G. max [85], SbAMT3;1 and SbAMT4;1 in S. bicolor [87], and ZmAMT3;1 in Zea mais [193].Given an acidic environment in the PAS, NH 4 + is deprotonated prior to its transport [194].The uncharged NH 3 is then released by AM-induced AMT into the cytoplasm of arbuscule-containing cells.Thus, the remaining protons stay in the PAS for their involvement in the pH gradient and the subsequent H + -dependent transport processes [195].NH 4 + is the preferred form for AMF to be taken up over nitrate, and AMT-mediated ammonium transport across the periarbuscular membrane might be the dominant pathway for Myc-dependent N acquisition [193,196].Therefore, it is possible that a symbiotic pathway for NO 3 − uptake could take a more prevalent place than the mycorrhizal NH 4 + uptake route, at least in some plant species [86].In wetland plants, such as rice, paddy farming requires flooding and upland conditions in which NH 4 + and NO 3 − are easily immobilized, respectively.Consistent with these conditions, putative encoding genes induced in mycorrhizal rice plants for both forms of N supply have been identified [88,139,197].Of the three identified genes, only one is AM-inducible (OsAMT3), which has been exclusively up-regulated, under all N concentrations, in the R. irregularis colonized roots, while the two others (OsAMT1;1 and OsAMT1;3) have been down-regulated under low N supply, assuming a secondary role after activation of the N uptake pathway [197].Recently, Wang et al. [88] have newly identified an OsNPF4,5 encoding gene in rice providing the N uptake via the NO 3 − mycorrhiza acquisition route in drained soils.This additional source of N uptake sheds light on an interesting adaptative strategy that rice, and many other wetlands plant species, could evolve to use to switch between the NO 3 − mycorrhiza pathway and the NH 4 + mycorrhiza pathway when agricultural practices on such wetland crops alternate between partially flooded and drained soils [88].
Expression profiling and functional genomics studies have yielded a comprehensive understanding of the molecular mechanisms governing the symbiotic interaction between AMF and cereal/oilseed crops.These studies provide valuable insights into the genetic and molecular factors that govern mycorrhizal development, nutrient exchange, and the host response to symbiosis-related signals (Table 2).
• AMF symbiosis regulation
As the symbiosis progresses, it comes to an end after a few days to avoid overcolonization, which could be metabolically costly for the plants.Interestingly, plants engage regulatory signaling molecules to fine-tune levels of fungal proliferation within the roots and hence to control the temporal extent of the symbiosis at the mutualistic level.On the fungal side, the arbuscular structure degenerates, allowing the host cells to recover and be recolonized if their nutritional status requires doing so.Sugar, Pi, and N may represent the main signaling molecules that plants rely on to achieve such regulation, where P and N starvation induce arbuscule maintenance, while limitation of C supply to the fungus leads to their collapse [10,66].The molecular symbiosis programs seem more complex, as hormones also take part in the regulation processes controlling AM symbiosis establishment, arbuscular development, and its degeneration [198].SLs, auxin, and abscisic acid generally act as positive regulators, whereas gibberellin, ethylene, and salicylic acid act as repressor of arbuscular development, while little has been explored in AM symbiosis about the roles of brassinosteroid, cytokinin, and jasmonic acid [199][200][201].SLs are the most recent addition to the classically acting plant signaling molecules with a broad range of roles in AMF symbiosis.SL biosynthesis and exudation into the rhizosphere are highly dependent on nutrient availability, with an increase in particular under P-limiting conditions [202][203][204][205].Under P-deficiencies, SL levels are induced into the roots and released around, creating permissive conditions in the early stages to initiate AMF symbiosis [206].Moreover, rice mutants impaired in SL biosynthesis or export display a reduced level of AM colonization, even if the morphology of intraradical fungal structures remain unchanged [60].In line with this, Kobae et al. [207] reported that infection length in rice SL-deficient d17/d10 mutants was decreased compared to controls, although no affected arbuscules were observed.The accompanying phenotypical evaluation revealed that secondary hyphopodia and consequently secondary infections were reduced in d17/d10 mutants, suggesting a sustainable need of SLs for a maximal secondary colonization level.Additionally, different expression profiles of the two key enzymes involved in early steps of SL biosynthesis, CCD7 and CCD8, were also detected during late stages of mycorrhizal colonization [208,209].These data, even if fragmented, indicate a continuous requirement of SLs in both early and late stages of the symbiotic association, highlighting an effective interaction between P starvation signaling pathways and SL signaling in plants [210].Interestingly, the induced expression of SL biosynthetic genes, under P-deficiencies, requires the two GRAS TFs belonging to CSSP, NSP1, and NSP2 [211].It is becoming increasingly evident that SLs not only connect the plant P status with symbiosis but also act as a hub integrating inputs from other hormones.
Gibberellic acid (GA) is another hormone that acts in response to plant Pi status, as Pi shortage reduces expression of GA biosynthesis genes but promotes transcription of DELLA genes, which are themselves repressors of GA signaling [212].GAs have been repeatedly described as repressors of AMF symbiosis, based on the analysis of GA-response mutants and transcriptomic studies.Cui et al. [213] have reported that AM-colonized peanut roots exhibited high GA content with up-regulated DELLA transcripts and the encoding gene of a key enzyme in GA biosynthesis.For instance, in rice, the absence of DELLA proteins, negative regulators of GA signaling, is associated with reduced numbers of arbuscules, whereas their overexpression enhances colonization compared to WT [63,165].These data provide the first evidence that GAs modulate AM colonization via the DELLA proteins, which themselves promote arbuscule formation through the suppression of GA signaling.Interestingly, DELLA proteins can interact with IPD3/CYCLOPS, a component of the CSSP, to activate the expression of RAM1, a GRAS-domain TF required for arbuscule branching and the fine-tuning of plant lipid biosynthesis and their transfer to the fungal arbuscules [166].Besides their role as positive regulators in promoting arbuscule development, DELLAs also regulate arbuscule lifespan through interaction with a MYB TF that promotes the expression of degeneration-associated genes [214].Notably, a cross-talk between SLs and GAs has emerged, as Nakamura et al. [215] have pointed out the interplay between the SL receptor and the GA-signaling repressor, termed D14 and SLR1, respectively.
The roles of SLs are significant, considering the countless interactions that these hormones orchestrate with soil components (such as Pi) as well as the emerged crosstalk with other phytohormones.This raises important questions about the biological relevance of each actor in this intricate association, allowing us to pinpoint a single 'master regulator' that could be used in biotechnological approaches.Moreover, such identification potentially leads to unleashing the full repertoire of both the AMF effects and the plant responses at a high-performance level that the symbiosis could offer.For example, large-scale application of SLs, combined with AMF, could be fully harnessed not only to improve plant fitness but also to enhance plant adaptative responses under environmentally harsh conditions in the ongoing global climate change context.On the other hand, functional genomic approaches have significantly advanced our understanding of the molecular basis of AMF symbiosis in cereal and oilseed crops.Such studies illuminate the mechanistic basis of some central traits in AMF-plant interactions; for example, by identifying potential genes involved in the key stages of symbiosis establishment or those taking place in nutrient uptake.Once identified, synthetic biology methods can handle these candidate genes by increasing their transcription or by inserting new ones from foreign organisms [216].The advantages taken from genomic methods, if combined, will potentially contribute to enhancing yield and other important agronomic traits that cereals/oilseeds are in real need of.Indeed, manipulating fungal traits in favor of boosting AMF colonization performance in flooded soils, which are harsh for AMF growth and germination, could be advantageous for improving NUE for such a significant culture.Further multi-omics studies must be ambitiously generalized so that other agronomically important plant species, such as cereals/oilseeds, could benefit from AMF symbiosis.AMF have been shown to be able to retrieve remote nutrients and make them inaccessible for use by plants, thanks to their important mycelium network and enzymatic molecules.On the molecular level, research has highlighted evidence on the mycorrhizal symbiotic association role in crops such as cereals and oilseeds (Table 3).
Unlocking the
The symbiotic interaction between AMF and cereals appears to be specific.To highlight this specificity between wheat (Triticum aestivum) and mycorrhizae, root-associated AMF were characterized via the sequencing of the large subunit ribosomal DNA (LSU rDNA) gene.The identification of AMF species through DNA sequencing revealed that AMF species belonged to Glomeraceae (mostly), Claroideoglomeraceae, Acaulosporaceae, Gigasporaceae, Archaeosporaceae, and Paraglomeraceae.Furthermore, five symbiotic genes of T. aestivum were strongly expressed: TaCASTOR, TaPOLLUX, TaCCaMK, TaCyclops, and TaSCL26 (NSP2), indicating a preferential symbiotic association.Looking at the mycorrhizae-induced gene expression of the phosphate transporter PhT1 Myc gene, the expression of the specially induced AMF P transporter TaPhT1 Myc was considerably higher at the level of T. aestivum roots associated with AMF from the Glomerals order [217].Further investigations show that wheat-mycorrhizae induced specific genes related to TaPhT most likely extend via segmental as well as tandem duplication events [218].The P transporter family PhT1 plays a key role in the uptake of Pi all the way from soil to the root, is mostly up-regulated in response to P deficiency, has a higher affinity to Pi, and is expressed importantly at the level of roots (rhizodermal cells, outer cortex) [219].
Additionally, the mycorrhizal-specific Pi transporters TaPht-Myc, HvPT8, and BdPT3 are AMF-up-regulated in wheat and barley (H.vulgare), which further emphasizes the important involvement of the PhT Myc gene in P acquisition [220].Furthermore, the expression of genes related to N metabolic pathways was recently recorded in wheat roots inoculated with F. mosseae, which manifested in the expression of these genes at the level of cell walls [221].On the other hand, durum wheat (Triticum durum) inoculated with an ensemble of AMF species dominated by Glomus exhibited a strong expression of the nitrate transporter NRT1.1.The abundance of NRTI.1 transcripts is probably attributed to the reduced availability of NH 4 + [222].The NRT1 family is involved in the regulation of short-distance NO 3 − distribution at the level of root cells.The regulation faculty may be due to the OsNPF7.2protein, located in the NRT1 family, as showcased in the vacuolar membrane of rice [88].Within environments marked by deficiency in Zn, HvZIP genes can be up-regulated.Such a case was observed in H. vulgare inoculated with R. irregularis, where HvZIP13 was strongly up-regulated in response to Zn deficiency [92], pointing out a possible role of AMF in enhancing the barley grains' quality with regard to Zn content.
In response to an Fe-poor environment, the expression of the representative gene (HaFRO1) related to the activity of ferric reductase can be triggered following AMF-root association with an oilseed crop such as sunflower (Helianthus annuus) [223].Additionally, HaIRT1, HaNramp1, and HaZIP1 can also be up-regulated, implying that Fe as well as Zn transporters can concomitantly be implicated in AMF-alleviation effects of Fe shortage.The AMF species ensemble governed by Glomus (R. intraradices, F. mosseae, G. aggregatum, G. etunicatum) mediated-action possibly induced HaZIP1 transporter within H. annuus, thereby enhancing Zn assimilation, which is apparently related to Fe-deficiency alleviation.The effectiveness of the AMF consortium has yet again been highlighted in the work of Sheteiwy et al. [224], where the inoculation with G. clarum, G. mosseae, and Gigaspora margarita of soybean boosted N fixation via nodulation and possibly, though partially, positively regulated genes encoding NO 3 -transporters (e.g., NRT1) [225].Therefore, the metabolism of oilseed crops can also be stimulated, thanks to the mycorrhizal symbiosis.The up-regulation of the G. max-Sucrose Synthase (GmSuSy), for instance, can contribute through triggering alteration at the transcriptional level [224].
Optimizing Plant-Mycorrhizal Associations for Improved Yield
In response to drought stress, the symbiotic association between AMF (F.mosseae) and wheat presented a positive impact on transcription profile (13405 up-regulated genes) of plant growth (plant biomass and spike).This beneficial effect was also related to membrane and cell wall constituents.Differentially expressed genes were detected in lipid and carbohydrate metabolic processes as well as cellulose synthase activity related genes [221] (Table 3).Additionally, growth traits of two durum wheat (Svevo and Etrusco) cultivars grown under water shortage and inoculated with AMF revealed differential molecular behaviors.It was observed that the combination of drought stress and AMF inoculation significantly affected TdSHN1 expression in Svevo and TdDRF1 (genes involved drought stress responses) in both cultivars [226].The same study suggested that this positive impact of AMF-Svevo plants could be linked to the enhancement of water use efficiency (WUE) via the modulation of SHN1 genes.
A recent study revealed an increment of plant biomass under Fe deficiency in mycorrhizal sunflower plants.This improvement was directly correlated with an up-regulation of HaIRT1, HaNramp1, and HaZIP1 mycorrhizal sunflower roots [223].In addition, an overexpression of catalase and peroxidase genes was observed in soybean treated with AMF combined with Bradyrhizobium japonicum, which led to an amelioration of the biomass and grain yield under drought stress.Under the same conditions, a down-regulation was observed in the proline metabolism genes P5CR, P5CS, P5CDH, and PDH [224].
Moving toward Systems Biology for Mycorrhizal Management in Cereal and Oilseed Crops
Systems biology is an emerging field of biology that aims to study how parts fit together to form functional biological systems [227,228].The study and understanding of biological systems require a range of new analytical techniques, measurement technologies, experimental methods, and software tools [229].For example, technologies that allow comprehensive measurements of DNA sequence, gene expression profiles, protein-protein interactions, and -omics data are critical for understanding biological systems.While AMF play a crucial role in agriculture and ecosystems, their genetics are not yet fully understood.The regulation of gene expression is a key factor in understanding the biological mechanisms of an organism [230].This regulation becomes even more crucial and complex in the case of organisms that form an intimate symbiosis with others [231].To date, R. irregularis is the only mycorrhiza with a fully sequenced genome [232].Regardless, the application of AMF inoculants as biofertilizers and biocontrol agents is integral to farming practices.The beneficial effects of AMF include the enhancement of key physiological processes, such as water and nutrient uptake, photosynthesis, and source-sink relationships that promote growth and development.AMF also play a role in regulating osmotic balance and ion homeostasis through the modulation of phytohormone status, gene expression, protein function, and metabolite synthesis in plants [233][234][235].Extensive-omics analysis supported by metabolic data in wheat has shown that AM symbiosis confers greater productivity and of 30 resistance to biotic stress (e.g., X. translucens infection) in plants.The increase in productivity is accompanied by the local and systemic activation of pathways involved in nutrient uptake, primary metabolism, and phytohormone regulation.Defense-related pathways and a new set of genes exclusive to leaves of mycorrhized plants have been identified [236].The systems biology of plant-AMF interactions in response to environmental stimuli opens up new prospects for understanding the regulatory networks of plant tolerance modulated by AMF.
The rapid development of synthetic biology has brought new opportunities for modern agriculture.Synthetic biology can transform crops' metabolic pathways and genetic information and involves the application of microorganisms in agriculture.Consequently, it holds promising prospects in crop breeding, yield increase, and ensuring the safety of agricultural production environments.The use of AMF as microbial fertilizers represents a relatively mature application scenario of synthetic biology in agriculture [237].Currently, there is limited research on the use of mycorrhizae in synthetic biology, whether to enhance crop health or perform other biological functions such as phytoremediation.Synthetic biology can enhance the effect of AMF on plant health by increasing the expression of native host genes through the alteration of transcription rates or by inserting new genes from foreign organisms.Many techniques can be used to achieve this, including the use of CRISPR, a revolutionary technology that allows researchers to modify DNA with greater precision than existing technologies [238].
Concluding Remarks
Cereal and oilseed yield security and quality are intrinsically linked to the imperative of feeding a continuously expanding global population under the specter of climate change.In this context, understanding the contribution of mycorrhizal fungi to the production and nutritional quality of these crops becomes a priority and could drive the sustainable intensification of agriculture.As biofertilizers, they have the potential to counteract excessive fertilization and promote resilience to abiotic and biotic stresses, thereby fostering sustainable agriculture.Our review focuses on how cereals and oilseeds benefit from AMF symbiosis, shedding light on how plants regulate responses and the defensive and signaling modules in their interactions with AMF (a two-way interaction).However, our understanding of the underlying regulatory mechanisms governing cereals' and oilseeds' interaction with AMF remains fragmented, hindering microbial biotechnological applications.Drawing conclusions on the regulating mechanism(s) involved in multiway interactions is more complex than expected, with responses being finely tuned in timing, strength, and genotypes/species.Nevertheless, the contributions of each partner in a mycorrhizal association and the molecular and signaling pathways between plants and fungi are starting to be unraveled with state-of-the-art (Meta) genomic and molecular tools, coupled with high-throughput sequencing and advanced microscopy.It is also crucial to search for key genes/pathways/networks that determine AM responsiveness and affect cereals' and oilseeds' growth, and to begin experimenting with genetic modification of potential AMF to understand their mechanistic basis, how these symbioses function, and the benefits they provide to host plants.Future research should focus on genetic and molecular determinants to fully understand the metabolic pathways and mechanisms involved in AMF-induced performance in cereal and oilseed crops.Coupling systems biology approaches to mathematical modeling with experimental datasets encompassing the dynamics of the responses will be essential for improved predictions.
Over the past few decades, the development of novel genetic engineering and synthetic biology tools has spurred significant advances in the engineering and transfer of bacterial traits.However, there has been limited research on AMF use in synthetic biology, either to enhance crop health, particularly in cereals and oilseeds, or to perform advanced biological functions.Consideration should be given to research efforts aiming to construct resilient and competitive AMF strains isolated from the environment, as well as to more accurately replicate field trials in their experimental setups.Mycorrhizal engineering may of 30 offer the tools to design biotechnological applications addressing cereal and oilseed production and environmental challenges.Potential applications of synthetically modified AMF metabolism include increased P uptake, efficient production of high-value terpenoids (e.g., antibiotic monoterpenes) in host plants, and specific-strain AMF-mediated symbiosis with N-fixing bacteria.The development of 'customized' inocula with improved symbiotic abilities and potentially novel functions will represent key milestones in harnessing and developing more effective and safer (engineered) mycorrhizal diversity to benefit cereals/oilseeds.Through the integration of the aforementioned technologies, it is argued that harnessing AMF as biostimulants could play a crucial role in the sustainable intensification of agriculture in the coming years as the effects of anthropogenic disturbance and climate change continue to increase.
Oilseed Crop Responsiveness in the Presence of AM Colonization 2.1.Recognition and Signaling between AMF and Cereal and Oilseed Crops 2.1.1.Receptor-Mediated Recognition of AMF Signals in Cereals and Oilseeds
Potential: Maximizing Nutrient Uptake and Growth through Mycorrhizal Symbiosis in Cereal and Oilseed Crops 3.1.Unleashing Nutrient Power: Enhancing Nutrient Acquisition and Growth in Cereal and Oilseed Crops 3.1.1.Manipulating Symbiotic Genes for Increased Nutrient Acquisition
Table 1 .
List of some transporters involved in nutrient exchange during AMF-cereal and AMF-oilseed crop symbiosis.
Table 2 .
Genetic regulation of mycorrhizal symbiosis in cereal and oilseed crops.
Table 3 .
Key mechanisms underlying AMF effects on the growth or yields of cereal or oilseed crops. | 15,579.2 | 2024-01-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Environmental Science",
"Biology",
"Engineering"
] |
An Ensemble Classifier for Eukaryotic Protein Subcellular Location Prediction Using Gene Ontology Categories and Amino Acid Hydrophobicity
With the rapid increase of protein sequences in the post-genomic age, it is challenging to develop accurate and automated methods for reliably and quickly predicting their subcellular localizations. Till now, many efforts have been tried, but most of which used only a single algorithm. In this paper, we proposed an ensemble classifier of KNN (k-nearest neighbor) and SVM (support vector machine) algorithms to predict the subcellular localization of eukaryotic proteins based on a voting system. The overall prediction accuracies by the one-versus-one strategy are 78.17%, 89.94% and 75.55% for three benchmark datasets of eukaryotic proteins. The improved prediction accuracies reveal that GO annotations and hydrophobicity of amino acids help to predict subcellular locations of eukaryotic proteins.
Introduction
Researches on subcellular location of proteins are important for elucidating their functions involved in various cellular processes, as well as in understanding some disease mechanisms and developing novel drugs. Since experimental determinations of the localization are time-consuming, tedious and costly, especially for the rapid accumulation of protein sequences, it is highly desirable to develop effective computational methods for accurately and quickly predicting their subcellular attributes.
In the past few years, many computational methods have been developed for this purpose [1,2,3,4]. These methods can be divided into two main categories [5]. Methods in the first category are based on the observation that amino acid compositions of extracellular and intracellular proteins are significantly different [6]. Along this line, many computational approaches based on amino acid composition, dipeptide composition [7] and gapped amino acid pairs [8] were proposed. Meanwhile, to incorporate more sequence information, many other features were incorporated, such as amphiphility of amino acids [9], functional domain composition [10], psi-blast profile [11,12] and so on. Methods in the second category are based on a certain sorting signals [13,14], including signal peptides, chloroplast transit peptides and mitochondrial targeting peptides. For example, Emanuelsson et al. [14] provided detailed instructions for the use of SignalP and ChloroP in prediction of cleavage sites for secretory pathway signal peptides and chloroplast transit peptides. However, the reliability of these methods is highly dependent on protein N-terminal sequence assignments, and the molecular mechanisms related to sorting signals are rather complex and not interpreted clearly.
Not only protein sequence information but also prediction algorithms could affect the accuracy of the subcellular localization prediction. So far, many computational techniques, such as the hidden Markov models (HMM) [15,16], neural network [17], Knearest neighbor (KNN) [18] and support vector machine (SVM) [5,19] were introduced for the prediction of protein subcellular localization. However, most of the current predictors are based on a single theory which could have its own inherent defects, so their predictions are not satisfactory. For example, the number of parameters that need to be evaluated in an HMM is large [20]. The neural network can suffer from multiple local minima [21]. Besides, quite a few ensemble classifiers [7,22,23] for prediction of protein subcellular localizations have been proposed. However, many of the ensemble classifiers were actually engineered only by a single algorithm, such as the fuzzy KNN [7], KNN [22], and Bayesian [23]. Other ensemble classifiers, such as CE-PLoc [24] and the KNN-SVM ensemble classifier proposed by Zhang [25], were engineered by different algorithms, mostly including SVM and KNN. Along this line, an ensemble classifier making use of the classical SVM and KNN algorithms was developed in this article to predict subcellular localization of eukaryotic proteins.
We apply our method to three widely used eukaryotic protein datasets. By the jackknife cross-validation test [26,27,28,29], the ensemble classifier shows high accuracies and may play an important complementary role to existing methods.
Datasets
In order to evaluate the performance of the proposed method and compare it with current methods, we introduced three widely used datasets into this study. The first dataset was constructed by Chou [30]. This dataset (denoted as iLoc8897) consists of 8,897 locative protein sequences (7,766 different proteins), which divided into 22 subcellular locations. Among the 7,766 different eukaryotic proteins, 6,687 belong to one subcellular location, 1,029 to two locations, 48 to three locations, and 2 to four locations. None of the proteins has $25% sequence identity to any other in the same subset. The second benchmark dataset was constructed by Park and Kanehisa [8]. This dataset (denoted as Euk7579) contains 7579 proteins, which are divided into 12 subcellular locations. Proteins in this dataset have the pairwised sequence similarity below 80%. The third dataset was constructed by Shen and Chou [31]. This dataset (denoted as Hum3681) consists of 3,681 locative protein sequences (3,106 different human proteins), which are divided into 14 human subcellular locations. Among the 3,106 different proteins, 2,580 belong to one subcellular location, 480 to two locations, 43 to three locations, and 3 to four locations. None of the proteins has $25% sequence identity to any other in the same subcellular location. The detailed information of the three datasets are listed in Table 1.
Gene Ontology
Gene Ontology (GO) is a major bioinformatics initiative. It meets the need for consistent descriptions of gene products in different databases. Gene Ontology database is established on the three criteria: molecular function, cellular component and biological process. It has been developed to manage the overwhelming mass of current biological data from a computational perspective and become a standard tool to annotate gene products for various databases [32,33]. Accordingly, GO annotation has been being used for diverse sequence-based prediction tasks, such as analyzing the pathogenic gene function with human squamous cell cervical carcinoma [34], mapping molecular responses to xenoestrogens [35], predicting the enzymatic attribute of proteins [36], predicting the transcription factor DNA binding preference [37], and predicting the eukaryotic protein subcellular localization [38]. In particular, the growth of Gene Ontology databases has increased the effectiveness of GO-based features [39]. As a result, Gene Ontology could be used to improve the predictive performance of protein subcellular localization [22,40].
We downloaded all GO data at ftp://ftp.ebi.ac.uk/pub/ databases/GO/goa/UNIPROT/(released on March 15, 2010), and searched the GO terms for all the protein entries in the three datasets. We eliminate those proteins, which have no corresponding GO terms and the number (60, 127 and 4 for the iLoc8897, Euk7579 and Hum3681 datasets) are relatively small compared to the total datasets. We consider this would not have a great influence on its final accuracy. After this step, we got a list of GO terms for each protein entry of the three datasets. For example, the human protein entry ''Q9H400'' in the Hum3681 dataset corresponds to four GO numbers, i.e., GO: 0005886, GO: 0006955, GO: 0016020 and GO: 0016021, while the protein As we know, if we want to describe all possible GO terms for a certain dataset, the simplest way to vector represent a protein was using a binary feature component for a protein. We used value 1 if the corresponding GO number appears and value 0 if it does not appear. For example, the human protein entry ''Q8TDM5'' in the Hum3681 dataset corresponds to seven GO numbers in the GO database, i.e., GO: 0001669, GO: 0005515, GO: 0005886, GO: 0007155, GO: 0016020, GO: 0031225 and GO: 0031410, which corresponded to GO_compress: 0000212, GO_compress: 0001037, GO_compress: 0001203, GO_compress: 0001722, GO_compress: 0002543, GO_compress: 0003360, GO_compress: 0003398 in the GO_compress database. So the 212 th , 1037 th , 1203 rd , 1722 nd , 2543 rd , 3360 th , and 3398 th components of the feature vector were assigned the value 1 and the rest 5553{7~5546 components with the value 0. At last, we transformed the GO terms annotated for each human protein into a 5553-dimension input vector.
Amphiphilic pseudo amino acid composition
In a protein, the hydrophobicity and hydrophilicity of the native amino acids play an important part in its folding, interior packing, catalytic mechanism, as well as its interaction with other molecules in the environment [41]. Therefore, the two indices may be used to effectively reflect the subcellular locations of proteins. Both the hydrophobicity and hydrophilicity are introduced in the concept of AmPseAAC. As we know, the concept of AmPseAAC proposed by Chou [22] was widely used by many researchers in improving the prediction quality for protein subcellular localization [42,43]. Following the concept of AmPseAAC, a protein sample could be descripted by a 20z2l dimensional feature vector, where l is equal to L min {1, where L min is the length of the shortest protein sequence in the dataset. The 20z2l dimensional feature vector for a protein comprises 20 features of the conventional amino acid composition (AAC), and the rest 2l components reflect its sequence-order pattern through the amphiphilic feature. The protein representation is called the ''amphiphilic pseudo amino acid composition'' or ''AmPseAAC'' for short. In order to get more local sequence information, we incorporated 400 dipeptide components to the AmPseAAC. Then the new AmPseAAC is constructed and the dimension is increased to 420z2l, which are 420z2|49~518, 420z2|9~438, and 420z2|50~520 for the iLoc8897, Euk7579 and Hum3681 datasets, respectively. Then we combined the new AmPseAAC and Gene Ontology as the features for protein subcellular localization prediction. As a result, the dimensions of the final input feature vectors are 420z2|49z7871~8389, 420z2|9z6533~6971, and 420z2|50z5553~6073 for the iLoc8897, Euk7579 and Hum3681 datasets.
Feature extraction
Due to the limited numbers of learning examples, learning with a small number of features often leads to a better generalization of machine learning algorithms (Occam's razor) [44]. Additionally, with the increase of the dimension of the feature vector, the computational loads for some machine-learning tools, e.g., Support Vector Machine [45] and Neural Network [46], are seriously affected. As a result, we used the ''fselect.py'' in Libsvm software package to reduce the dimensionality. The fselect.py is a simple python script used F-score to select features. After running the python script, one could get an output file called ''.fscore'', in which each feature was given a score to describe the importance of it and all features were sorted by their scores. Then we chose the top features with the highest contribution scores (Figs. 1, 2, and 3).
The KNN-SVM ensemble classifier
A wide variety of machine learning methods have been proposed for predicting protein subcellular localization in recent years [47,48,49,50], such as Markov chain models [51], neural networks [46], K-Nearest Neighborhood (KNN) [18], and Support Vector Machines (SVM) [52,53]. In these methods, KNN and SVM are two popular classifiers in machine learning task. Previous studies presented that each algorithm has its own advantage and the ensemble classifier of different algorithms is the future direction of protein subcellular localization prediction. So, in this paper we proposed an ensemble classifier of KNN and SVM based on one-versus-one strategy and a voting system (Fig. 4). LIBSVM still has a few tunable parameters which affect the accuracy of the subcellular localization prediction and need to be determined. In this article, ''grid.py'' was used in the iLoc8897 dataset to select the parameter c and the regularization parameter C in LIBSVM [24]. Here, the iLoc8897 dataset was selected for optimization of the parameters of the classification models due to the following reasons: (i) compared to the other datasets, this dataset has the largest number of proteins, so it possesses a distinct statistical significance for training; (ii) sequences in this dataset have relatively low pairwise sequence homology; (iii) this dataset covers enough subcellular locations and was widely adopted for evaluating a new proposed method [30,38].
Prediction of protein subcellular localization is a multi-class classification problem. Here, the class number is equal to 22 for iLoc8897 dataset, 12 for Euk7579 dataset and 14 for Hum3681 dataset, respectively. A simple way to deal with the multi-class classification is to reduce the multi-classification to a series of binary classifications. During this study, we adopted the one-versusone method, i.e., 22|21=2~231, 12|11=2~66, and 14|13=2~91 binary classification tasks were constructed for the iLoc8897, Euk7579 and Hum3681 datasets. Compared to the one-versus-one approach, the one-versus-rest strategy has the shortage that the numbers of positive and negative training data points are not symmetric [54]. For each binary classification, the predictor (KNN or SVM) with the higher output accuracy was selected, and the free parameters, i.e., k for KNN and C and c for LIBSVM, are optimized by the iLoc8897 dataset.
Take the Hum3681 dataset as an example. Following the oneversus-one strategy, 14|13=2~91 binary classification tasks were constructed for this dataset. For each binary classification task, the KNN and SVM are used to predict the attribute of each protein.
As a result, we chose the predictor with the higher output accuracy, where the parameters of KNN and SVM were optimized by the iLoc8897 dataset. Then a score function was generated by the KNN-SVM ensemble classifier formed by fusing the 91 individual binary classifiers through a voting system (see Eqs. 1-3). Each protein was assigned to the subcellular location where S 1 ,S 2 , . . . ,S 14 represent the 14 subcellular locations. The voting score for the protein P belonging to class i is defined as where the d function in Eq. 2 is given by Subsequently, the query protein P was assigned to the class that gives the highest score for Eq. 2 of the 91 binary classifiers. We can assume that there are five subsets and 5|(5{1)=2~10 binary classification tasks are constructed. If the predicted classification results for a query protein P with the ten binary classifiers are R(1)~S 2 , R(2)~S 1 , R(3)~S 4 , R(4)~S 5 , R(5)S 2 , R(6)~S 2 , R(7)~S 5 , R(8)~S 3 , R(9)~S 5 , R(10)~S 4 that is, classifiers 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10 assign protein P to subsets 2, 1, 4, 5, 2, 2, 5, 3, 5 and 4, respectively. As a result, the voting scores for protein P are G 1~1 , G 2~3 , G 3~1 , G 4~2 , G 5~3 . Then protein P was predicted to classes 2 and 5, which both give the highest score of G 2~G5~3 .
Assessment of prediction performances
The prediction quality is examined by the jackknife test currently. Three methods, i.e., the jackknife test, sub-sampling test, and independent dataset test are often used for examining the accuracy of a statistical prediction method. The jackknife test is deemed the most objective and rigorous one [55,56].
The accuracy, the overall accuracy, the ''absolute true'' overall accuracy and Matthew's Correlation Coefficient (MCC) [57] for each subcellular location calculated for assessment of the prediction system are formulated as , if all the subcellular locations of the hth protein are exactly predicted without any overprediction or underprediction 0, otherwise where M is the class number, N is the total number of locative proteins, m(i) and m(j) are the numbers of the locative proteins in classes i and j, p n (i) and p n (j) are the numbers of the correctly predicted locative proteins of class i and class j by binary classifier n. V is the so-called ''absolute true'' overall accuracy. D is the number of total proteins investigated. TP i , FP i , TN i , and FN i are the numbers of true positives, false positives, true negatives, and false negatives in class i by the KNN-SVM ensemble classifier, respectively.
Selection of algorithms and parameters
It is important to point out that the best combination of parameters c and C depends on the dimension Dim of the protein top feature vector. In the present work, we select the parameters c and C when parameter Dim varied from 10 to 50. As seen in Table 2, the highest prediction accuracy was 78.01% at c~0:125, C~2 and Dim~45. While the prediction accuracy obtained by KNN changed as parameter k varied from 1 to 9, and the highest prediction accuracy (74.70%) was obtained at k~5 and Dim~45 for the iLoc8897 dataset. Then the same parameters, i.e., c~0:125, C~2, k~5 and Dim~45 were used for all the three datasets.
Because the Hum3681 dataset has 14 subcellular locations, a total of 14|13=2~91 binary classification tasks were constructed. For each one-versus-one classification task, the algorithm (KNN or SVM), which gave a higher prediction accuracy for Eq. 4, was adopt as the final classifier. For example, the 6 th , 21 st , 26 th , 32 nd , 34 th , 42 nd , 43 rd , 76 th , 82 nd , 84 th and 90 th binary classifiers (11 of 91 classifiers) was based on the KNN method, because the accuracy of KNN method was higher than LIBSVM method by jackknife test, while the rest 91{11~80 binary classifiers were based on LIBSVM, because the accuracy of LIBSVM method was higher than KNN method by jackknife test.
In addition, most of the existing methods for predicting protein subcellular localization are limited to a single location. It is instructive to note that the KNN-SVM ensemble classifier can effectively deal with multiple-location proteins as well, that is, the predicted result for a query protein P may be attributed to two or more subcellular locations. For example, the real subcellular locations of the protein entry ''Q05329'' in iLoc8897 dataset are S 2 ,S 12 ,S 21 f g , and the predicted subcellular locations for ''Q05329'' by the KNN-SVM ensemble classifier are also S 2 ,S 12 ,S 21 f g , because S 2 , S 12 , S 21 give the highest score (G 2~G12~G21~2 0) according to Eq. 2.
Comparison with other methods
In order to check the performance of our method, we made comparisons with the following methods: iLoc-Euk [30], Euk-mPLoc 2.0 [38], Hum-mPLoc 2.0 [31], LOCSVMPSI [58], Complexity-based method [59], and the method proposed by Park and Kanehisa [8] which are also based on the Euk7579 dataset. We also compared our method with the KNN binary classifiers, LIBSVM binary calssifiers, and the KNN-SVM ensemble classifier [25]. The comparison is summarized in Tables 3, 4, 5, and 6.
For the iLoc8897 dataset, the absolute true overall accuracy of the current approach is 75.64%, which is 4.37% higher than the iLoc-Euk method, though the overall accuracy is only 0.89% lower than it. In addition, our method achieves the best performances among the 22 subcellular locations except for the locations of Cytoplasm and Endoplasmic reticulum. Meanwhile, our method also performs better than Euk-mPLoc 2.0 [38] which is also based on the same dataset. For the Euk7579 dataset, the overall accuracy of the current approach is 89.94%, which is also higher than those achieved using the methods listed in Table 4 (from 6.44% to 14.94%). Meanwhile, our method also performs better than some other classifiers such as LOCSVMPSI [58] and complexity-based method [59]. As shown in Table 5, our method also achieves better performances than Hum-mPLoc 2.0. For the Hum3681 dataset, the overall accuracy of the current approach is 75.55%, which is 12.85% higher than the Hum-mPLoc 2.0 method. It is worth noting that all the three datasets (Euk-mPLoc 2.0, iLoc-Euk and Hum-mPLoc 2.0), which also extract sequence features from the Gene Ontology information to represent the query protein, get the comparable accuracies to the present method. This demonstrates that the Gene Ontology information provides a better source of information for the prediction of protein subcellular location. As shown in Table 6, the proposed method, examined by the jackknife test, also performs better than Euk-mPLoc and the KNN-SVM ensemble classifier [25]. For the Euk6181 dataset [60], the overall accuracy of the proposed method is 79.14%, which is 11.74% and 8.64% higher than Euk-mPLoc and the KNN-SVM ensemble classifier respectively [25].
As illustrated by some researchers, protein sequence similarity within the datasets has a significant effect on the prediction performance of protein subcellular location, i.e., accuracies will be overestimated when using high-similarity datasets. To avoid this problem, two low-similarity datasets, i.e., the iLoc8897 dataset and Hum3681 dataset were used to evaluate the performance of our method. The results also show that our method achieves good performances and the prediction accuracies are higher than those achieved using the methods listed in Table 3 and Table 5.
A case study
To evaluate the performance of the proposed method, it was also used to predict the subcellular locations of some proteins used in our laboratory. Take two proteins for example. The first example is fibronectin (FN) [61,62], which is an ''extracell'' protein and abundant in the extracellular matrix and participates in many cellular processes, including osteoblastic differentiation/ mineralization, tissue repair, embryogenesis, cell migration/ adhesion, and blood clotting. The accession number for FN is shown in Table 7. According to our ensemble classifier, this protein was predicted as ''extracell'' protein, which is in accordance with the annotation in Swiss-Prot database. The second is cadherin 11 (CDH 11) [61,62], which is a plasma membrane protein preferentially expressed in osteoblasts. CDH 11 can promote cells to form specialized cell junctions and enhanced crosstalk between adjacent osteocytes. The accession number for CDH 11 is also shown in Table 7. We also predicted it correctly. More examples are list in Table 7. As is shown, 10 of all the 11 proteins are predicted in accordance with the Swiss-Prot annotations by the proposed method. While only 8 of 11 eukaryotic proteins and 2 of 4 human proteins are predicted correctly by iLoc-Euk and Hum-mPLoc2.0 respectively. We also used iLoc-Euk, Hum-mPLoc 2.0 and the proposed method to predict the subcellular locations of some multiplelocation proteins. As can be seen from Table 8, all subcellular locations of the protein Q05329 was correctly identified by the proposed method and iLoc-Euk, but not entirely correctly by Hum-mPLoc 2.0. The second protein P58335 was identified completely correctly by the proposed method, but according to iLoc-Euk and Hum-mPLoc 2.0, it was assigned to only one of its real subcellular locations. The third protein P30622 simultaneously exists at ''Cytoplasm'' and ''Cytoskeleton'' in Swiss-Prot. Both iLoc-Euk and Hum-mPLoc 2.0 only identified one location correctly. Although the proposed method incorrectly predicted P30622 as belonging to ''endosome'', yet it successfully identified two of its subcellular locations.
Conclusions
In this study, a KNN-SVM ensemble classifier by fusing the GO attributes and hydrophobicity features was investigated to predict subcellular location of eukaryotic proteins. Three widely used benchmark datasets were adopted in our work. To improve the prediction quality, the following strategies were applied: (i) representing protein samples by using Gene Ontology could effectively grasp the core features to indicate the subcellular Table 8. Examples to show the predicted results by three predictors on multiple-location proteins. localization, (ii) adopting the one-versus-one strategy and two most popular classifiers in machine learning task, i.e., LIBSVM and KNN to predict protein subcellular location, (iii) capturing the top features and learning with a small number of features might lead to a better generalization of machine learning algorithms (Occam's razor). In summary, the results of the predictions performed by KNN-SVM ensemble classifier indicate that our method is very promising and may play an important complementary role to existing methods. | 5,324.2 | 2012-01-30T00:00:00.000 | [
"Biology",
"Computer Science"
] |
A Novel Enhanced Coverage Optimization Algorithm for Effectively Solving Energy Optimization Problem in WSN
In Wireless Sensor Networks (WSN), Efficient-Energy Coverage (EEC) is one of the important issues for considering the (WSNs) implementation. In this study, we have developed the new algorithm ECO (Enhanced Coverage Optimization) for solving the EEC problem effectively. The proposed algorithm uses three types of major work for effectively solving the problem. One of the three pheromones is the local pheromone, which helps an ant organize its coverage set with fewer sensors. The other two pheromones are global pheromones, one of which is used to optimize the number of required active sensors per Point of Interest (PoI) and the other is used to form a sensor set that has as many senses as an ant has selected the number of active sensors by using the former pheromone. This study also introduces one technique that leads to a more realistic approach to solving the EEC problem that is to utilize the probabilistic sensor detection model. The main goal of ECO is Efficient Coverage on target area with minimum energy consumption and increased network's lifetime.
INTRODUCTION
Wireless Sensor Networks (WSNs) have attracted significant attention over the past few years.A growing list of civil and military applications can employ WSNs for increased effectiveness; especially in hostile and remote areas.Examples include disaster management, border protection, combat field surveillance.In these applications, a large number of sensors are expected, requiring careful architecture and management of the network.
The Wireless Sensor Network (WSN) is a class of wireless networks in which sensor nodes collect process and transmit data acquired from the physical environment to an external base station directly or, if required, uses other wireless sensor nodes to forward data to an external base station (Li et al., 2010).The transmitted data is then presented to the system by the gateway connection.The ideal wireless sensor is networked and scalable, consumes very little power, is smart and software programmable, capable of fast data acquisition, reliable and accurate over the long term, costs less to purchase and install and requires no real maintenance.WSN applications are used to monitor the surrounding environment in a wide range of areas, for example, medical, security, military and agricultural industries.
A Wireless Sensor Network (WSN) is a complex structure consisting of a large number of sensor nodes distributed over a target region.Each sensor has limited computational and storage capacity, a restricted sense and communication radios and a finite power supply.These constraints have led researchers to find better ways of using the sensor nodes looking for a reduction of energy consumption, while maintaining an acceptable coverage threshold.The increasingly cheaper and better technology, along with a wide range of applications, has played an important role in the growing popularity of WSNs.There are primarily four techniques used by efficient power management algorithms: • Long term scheduling, which uses a successive activation of disjoint covers (sets of sensors).
• Short term scheduling, which selectively activates nodes based on their individual battery status • Routing selection, which establishes the shortest path for data transmission.
• Rate allocation, which reduces the amount of data to be coded and transmitted by exploiting its correlation.These techniques, or any combination of them, could be implemented using either a distributed or centralized method.
A sensor node can only be equipped with a limited energy supply in all application scenarios.Energy is consumed during computation and communication among the nodes.The Sensor node lifetime shows a very strong dependency on battery lifetime (Luntovskyy et al., 2010).Selecting the optimum sensors and wireless communications link requires knowledge of the application and problem definition.Battery life, sensor update rates and size are all major design considerations.Examples of low data rate sensors include temperature, humidity and peak strain captured passively.Examples of high data rate sensors include strain, acceleration and vibration.
Many techniques have been proposed to conserve energy and prolong the network's lifetime (Noah et al., 2010;Anastasi et al., 2009).Among them, scheduling methods, which reduce energy consumption by planning the activities of the devices, have been shown to be effective (Lin et al., 2010).These activity scheduling methods need to have devices densely deployed in an interest area.Then, only part, or a subset of the devices accomplishes the sensing task, while the other devices can be scheduled into a sleep state to save energy.By scheduling the devices' activities from active to sleep, or vice versa, this method needs only a subset of the devices for monitoring an area of interest at any time.Therefore, the lifetime of the WSN is prolonged.To achieve a longer lifetime, it is important to find the maximum number of disjoint subsets of devices in the scheduling method.Many scheduling algorithms have been proposed to solve the EEC problem.
RELATED METHODOLOGY
The main objective of the sensor network is to cover the region.The random deployment of sensors to cover a given square-shaped area, where the circles represent the sensing range, each point of the area is monitored by at least one sensor (Ming et al., 2010).According to the sensor network architecture, two assumptions are made: • All the sensor nodes are static once deployed and each one knows its own location which can achieve by using some location system.• Every sensor independently in their sensing activities and schedules itself for or sleep intervals.
For a centralized approach to work effectively, targets must have fixed locations as well as the deployed sensors.This unchangeable structure of the network permits long term scheduling to take place only once in a central computing unit, where information about all sensor's location is gathered just after deployment to solve the EDSC problem.When a solution to the problem is available, it is transmitted to each sensor in the form of an index representing its membership to a cover that is used as the number of battery periods a sensor has to wait before turning itself to active mode.Clearly, the biggest disadvantage of centralized algorithms is that their functionality relies on the network's ability to transmit data from every single node to the central computing unit and vice versa.In probabilistic disc model (Chen et al., 2010) takes into account the uncertainty of the signal detection process and assumes that the detection probability is a continually decreasing function of the distance.Therefore, it is more realistic to presume that a sensor node can detect the occurrence of an event with a certain probability if the distance between the sensor and a PoI is greater than the sensing radius in the Boolean disc model.Heinzelman et al. (2002) developed a cluster based routing scheme called Low Energy adaptive cluster in hierarchy In LEACH the role of the cluster head is periodically transferred among the nodes in the network in order to distribute the energy consumption.The performance of LEACH is based on rounds.Then, a cluster head is elected in each round.In this election, the number of nodes that have not been cluster heads and the percentage of cluster heads are used.Once the cluster head is defined in the setup phase, it establishes a TDMA schedule for the transmissions in its cluster this scheduling allows nodes to switch off their interfaces when they are not going to be employed.The cluster head is the router to the sink and it is also responsible for the data aggregation.As the cluster head controls the sensors located in a close area, the data aggregation performed by this leader permits to remove redundancy.A centralized version of this protocol is LEACH-C (Lindsey and Raghavendra, 2002).This scheme is also based on time rounds which are divided into the setup phase and the steady-phase.In the setup phase, sensors inform the base station about their positions and about their energy level.With this information, the base station decides the structure of clusters and their corresponding cluster heads.Since the base station posses a complete knowledge of the status of the network, the cluster structure resulting from LEACH-C is considered an optimization of the results of LEACH.
The conventional ACO algorithm is based on the behavior of real ants.When a group of ants set out from their nest to search for a food source, they use a special kind of chemical to communicate with each other.The chemical is referred to as the pheromone.Once the ants discover a path to a food source, they deposit pheromone on the path.By sensing pheromone on the ground, ants can follow the path to food source discovered by other ants.As this process continues, most of the ants tend to choose the shortest path to food as there have been a huge amount of pheromones accumulated on this path (Selcuk and Karaboga, 2009).As time goes on, pheromones evaporate, opening up new possibilities and ants cooperate to choose a path with heavily laid pheromones.The ACO algorithm has a parallel architecture and a positive feedback loop mechanism.
PROPOSED ENERGY COVERAGE OPTIMIZATION ALGORITHM
The main objective of the proposed COA algorithm is efficient coverage of target areas with minimum energy consumption and also increased network's lifetime.Here, we are using one technique that is to utilize the probabilistic sensor detection model lead to a more realistic approach to solving the EEC problem.
The proposed system COA uses three types of pheromones to find the solution efficiently.One of the three pheromones is the local pheromone, which helps an ant organize its coverage set with fewer sensors.The other two pheromones are global pheromones, one of which is used to optimize the number of required active sensors per Point of Interest (PoI) and the other is used to form a sensor set that has as many senses as an ant has selected the number of active sensors by using the former pheromone.
The proposed algorithm can be viewed as the following procedures:
Initialization of the algorithm:
In the first stage, we collect position information of the sensors and the PoIs.After loading, we find and store a set of sensors which cover each PoI 'j'.The set is a TxN matrix that consists of the following elements: And also we initialized local pheromone and two global pheromones.This matrix (1) is used to initialize the global pheromone field (for organizing the Active Sensors (AS)) per PoI at the initial stage, for every time slot as follows: where is the residual energy of the sensor 'I' at time slot (ts).
Determining the Number of Active Sensors (NoAS) is an axiomatic fact that the fewer the number of active sensors per PoI, the length is the lifetime of the WSN.Initialize the global pheromone field using a Gaussian function which is based on the following equation: where is the number of sensors covered at PoI and 1 … .This function has a constant σ and the men µ used in Eq. ( 3) is zero at the beginning of the proposed algorithm but increases with the number of times that the first ant of the first colony fails to organize the sensor set, which met the condition is, PoI in the region is mostly covered by at least some sensors.The repeated failure of this ant under the initialized means that the current distribution with the mean is µ insufficient to determine the number of active sensors at PoI.Thus, there is a higher chance to organize the efficient set of the sensors for all of the ants.
Selection:
The Selection process is based on Roulette wheel selection.In Roulette wheel selection, each individual is selected with a probability proportional to its fitness value.Thus, weak solutions are eliminated and strong solutions are considered to form the next iteration.
To find a covering sensor set at PoI 'j', ant 'k' first determines the number of active sensors, , using the global pheromone field .Then, ant chooses with a probability determined in accordance with the intensity of pheromone.The selection probability of the for ant 'k' is as follows: where, is the number of sensors covered at PoI.Eventually, ant k determines through roulette wheel selection (or the fitness proportionate selection) using the above probabilities.
The selection probability of the sensor , for ant 'k', when ant 'k' plays the roulette wheel selection is as follows: where, allowed = (j)-{tab }, or this is the set of remainder sensors, except that the first one is selected among sensors, except that the first one is selected among sensors, is the local pheromone, which has effects in the third loop, i.e., while ant travels alone.In contrast, the global pheromone fields, and, which have influence in one time slot, i.e., the time it takes to complete the travel of the colonies.
Local pheromone updating:
After finishing ant k's selection, this pheromone field is initialized and is then used by the ant k+1.This field is updated whenever ant 'k' decides on the sensors that cover the PoI 'j'.The selected sensor gets the value every time it is selected by ant 'k', as follows: , 1 , ∆ . (6) The local pheromone . is updated at the end of ant k's travel for the PoI 'j'.Thus, this equation describes the policy of the pheromone update at t+1 which is the point when ant 'k' has organized the subset to cover the PoI 'j' if 't' is the point of the previous update.∆ . is the amount of pheromone trail added on the element of vector for sensor 'i' chosen by ant 'k' at the PoI 'j' and where ∆ . is the amount of the updated pheromone trail and is given as follows: Rank list maintenance: Ant organizes a subset that covers the PoI 'i' through the roulette method.The subset is generated and stored as the set , which is selected by ant k and is the union set of .Each set that is made by 'M' ants is saved on the Rank List cell.The tour of a colony ends here.When the colony finished .Cost from the Eq. ( 7) with the M sets collected by the previous colonies, until the (cn-1) th iteration (or colony) and the new M sets are made by the current cn th colony.Then, we have to arrange the total 2M sets in increasing numerical order.Among them, we cut M sets in order and store them in the Rank List again.
Global pheromones updating:
After the tour of a colony ends again, the global pheromone trail amount , and , are updated, using the cost of the sets in RankList (M) if the configuration of the RankList (M) is completed.
The global pheromone trail amount is updated according to the following formula: where, ρ is the pheromone decay parameter.The pheromone is evaporated as time goes on.As mentioned above, the global pheromone is updated at the end of travel of a colony that has M ants.Thus, this equation describes the policy of the pheromone update at t+M if t is the point of the previous update and where ∆ , is the added pheromone trail amount at t+M, given as follows: is determined in accordance with the ranking of rank list is as follows: (11) Calculation of C-Best: If the number of colonies that accomplish the task is more than Mc, the current time slot is finished and then at that time is the optimal cover set of sensors.After that, a new time slot begins and the global pheromone fields and the ranking list are initialized at the beginning.The IACO algorithm finds the optimal cover set of sensors in every time slot, recursively.However, this iteration process goes on until, each PoI must be covered by at least some sensors (i.e., There is no longer satisfied by any of the PoIs, or the network fails to cover any PoIs).The final set C cell is the group of and the final solution of the EEC problem.The number of the time slot, TS, also becomes the lifetime of the WSN.
SIMULATION RESULTS AND PERFORMANCE EVALUATION
The performance evaluation is carried out as a simulation study using NS2.We use the following metrics in evaluating the performance of the different multicast routing protocols.The packet delivery ratio is computed as the r a t i o of total number of unique packets received by the receivers to the total number of packets transmitted by all sources times the number of receivers.Routing overhead is the ratio between the number of control bytes transmitted to the number of data bytes received.
The simulation results of our proposed ECO algorithm are compared to other leading algorithm ACO -C).In these simulations, we use synthetic MANET scenarios, in which we subject the optimization algorithm to a wide range of mobility, traffic load, and multicast group characteristics (i.e., group size and number of sources).
Figure 2 shows the packet delivery ratio as a function of traffic load.It is observed that all optimization algorithm are affected by the increase in network traffic.For the traffic loads considered, ECO algorithm still outperforms ACO and LEACH-C in terms of delivery ratios.T h e performance of the Proposed ECO algorithm is Figure 3 depicts the control overhead per data byte delivered as a function of traffic load.It can be seen that Proposed algorithm control overhead remains almost constant with increasing load.The high routing overhead seems to suggest that ECO algorithm can be quite expensive at higher traffic loads and, hence, not scalable with increased traffic loads.
Figure 4 shows the packet delivery ratio as a function of the number of senders.Note that both the Proposed optimization algorithm and ACO packet delivery ratios remain fairly constant with the number of senders; thus, they do not suffer from increased contention except at a higher number of sources, where a slight drop off can be observed and is attributed to data packet loss due to collisions.
Figure 5 depicts how control overhead varies with the number of traffic sources.
CONCLUSION
In this study, a novel ECO algorithm is optimized to solve the EEC problem.The proposed algorithm has new characteristics that are different from conventional ACO algorithms.It uses three types of pheromones to solve the EEC problem efficiently.One is the local pheromone, which helps an ant organize a coverage set with fewer sensors; the others are the global pheromones.One global pheromone is used to optimize the number of required active sensors per PoI and the second global pheromone is used to form a sensor set that has as many sensors as an ant has selected the number of active sensors by using the former pheromone.It also utilizes a reduced number of the user's parameters.To achieve this, it introduced the heterogeneous WSN, which is made by the random selection of the parameters of the probabilistic sensor detection model.So the simulation result shows that the ECO algorithm used to decrease the energy consumption and also increase the network's lifetime.
Fig. 1 :
Fig. 1: Flowchart of the proposed algorithm Initialization of the algorithm: Collect the position information of sensors and PoIs.And also all pheromone values and parameters are initialized • Initialization of ants: Initialized the number of ants M, which compose a colony and the also initialized the number of colonies , which is the repeated count within a time slot • Selection: Select the number of active sensors p and also select the active sensors p using roulette wheel selection • Local pheromone updating: Local pheromone is updated at the end of each ant k's travel for PoI j. • Rank list: Ant k organizes a subset is stored as the set .Each set that is made by M ants is saved on the Rank List cell • Global pheromones updating: Global pheromone trail amount and are updated, if Rank list M is completed • Find : The set with minimum cost among M is individually saved at .To update , repeat the same process times.The flowchart of this algorithm is given by Fig. 1.These procedures are described in details below.
Fig. 2 :Fig. 5 :
Fig. 2: Packet delivery ratio as a function of traffic load ACO and LEACH-C as traffic load increases on account of the great number of redundant transmissions. | 4,621.4 | 2014-01-27T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
A comparison of next-generation turbulence profiling instruments at Paranal
A six-night optical turbulence monitoring campaign has been carried at Cerro Paranal observatory in February and March, 2023 to facilitate the development and characterisation of two novel atmospheric site monitoring instruments - the ring-image next generation scintillation sensor (RINGSS) and 24-hour Shack Hartmann image motion monitor (24hSHIMM) in the context of providing optical turbulence monitoring support for upcoming 20-40m telescopes. Alongside these two instruments, the well-characterised Stereo-SCIDAR and 2016-MASS-DIMM were operated throughout the campaign to provide data for comparison. All instruments obtain estimates of optical turbulence profiles through statistical analysis of intensity and wavefront angle-of-arrival fluctuations from observations of stars. Contemporaneous measurements of the integrated turbulence parameters are compared and the ratios, bias, unbiased root mean square error and correlation of results from each instrument assessed. Strong agreement was observed in measurements of seeing, free atmosphere seeing and coherence time. Less correlation is seen for isoplanatic angle, although the median values agree well. Median turbulence parameters are further compared against long-term monitoring data from Paranal instruments. Profiles from the three small-telescope instruments are compared with the 100-layer profile from the stereo-SCIDAR. It is found that the RINGSS and SHIMM offer improved accuracy in characterisation of the vertical optical turbulence profile over the MASS-DIMM. Finally, the first results of continuous optical turbulence monitoring at Paranal are presented which show a strong diurnal variation and predictable trend in the seeing. A value of 2.65"is found for the median daytime seeing.
INTRODUCTION
Atmospheric optical turbulence (OT) induces both phase distortion and amplitude modulation of light that propagates through it, leading to a severe reduction in achievable image quality from ground-based optical instruments.Large astronomical telescopes typically employ adaptive optics (AO) systems to compensate for the wavefront phase distortion, however there is a need for external monitoring of OT during the design, validation and commissioning of such systems.Additionally, knowledge of the vertical distribution of optical turbulence will be crucial for predicting and verifying the performance of multi conjugate adaptive optics (MCAO) systems planned for 20-40m ELT-class telescopes (Costille & Fusco 2011;Tokovinin 2010).These systems will therefore demand instruments that measure both "integrated" parameters relevant to AO and the vertical distribution of optical turbulence.Turbulence monitoring instruments are today installed at many of the largest astronomical observatories, provid-★ E-mail<EMAIL_ADDRESS>ing real-time measurements of turbulence conditions, ensuring that observational sensitivity requirements are met (Milli et al. 2019), and providing long-term site monitoring data which is highly desirable in the development of new optical instruments.Turbulence monitoring is also seen as increasingly important in improving the accuracy of meso-scale turbulence forecasting models (Masciadri et al. 2020), which offer further gains in efficiency for observation scheduling through the process of auto-regression (Masciadri et al. 2023) and will be highly beneficial to the operation of ELT-class instruments.The current standard, small-telescope OT monitoring instruments -the Multi Aperture Scintillation Sensor (MASS) and Differential Image Motion Monitor (DIMM) -are limited by the use of outdated CCD cameras, custom-manufactured equipment and, in the case of the MASS, a noted discrepancy in measurements of OT profiles compared to the high-resolution Stereo-Scintillation Detection and Ranging (S-SCIDAR) technique (Masciadri et al. 2014;Lombardi & Sarazin 2016).There is therefore significant motivation to develop new instruments based on modern technologies for deployment alongside ELTs.
The minimum requirement for such instruments is firstly accurate measurement of the astronomical seeing 0 .This parameter is directly related to the integrated turbulence strength of the atmosphere and represents the angular size of the seeing-limited (long-exposure) point spread function (PSF) for astronomical observations.The free atmosphere seeing, 0, is a measure of the seeing above an altitude of 500m (Lawrence et al. 2004) and enables a comparison of seeing decoupled from highly localised turbulence in the ground layer.Additional integrated turbulence parameters of interest include the coherence time, 0 , and isoplanatic angle, 0 (Roddier 1981).These are relevant to the operation of AO systems, representing respectively an upper limit on the time taken to measure and correct wavefront distortions and an upper limit of the achievable angular correction.MCAO and laser tomographic adaptive optics (LTAO) systems planned for ELT-instruments will also require knowledge of the optical turbulence profile, as do forecasting models, in order to provide meaningful validation of techniques.Accurate measurement of the optical turbulence profile is therefore also highly desirable.
Multi-instrument campaigns have been hosted a number of times at the European Southern Observatory (ESO) Paranal site, including for example Dali Ali et al. (2010) and Osborn et al. (2018).This work details the results from the most recent campaign at Paranal, in which three turbulence profiling instruments based on portable telescopes: the 24-hour Shack-Hartmann image motion monitor (24hSHIMM) (Griffiths et al. 2023), full aperture scintillation sensor (FASS) (Guesalaga et al. 2021) and ring-image next generation scintillation sensor (RINGSS) (Tokovinin 2021) were compared with permanently installed OT profiling instruments at the site.The primary motivation being to facilitate the development and characterisation of these next-generation instruments against existing techniques.The three instruments were co-located on the northernmost part of the observatory for 6 nights starting on the 27th of February, with the final night of observation on the 5th of March 2023.The ESO Multi Aperture Scintillation Sensor -Differential Image Motion Monitor (MASS-DIMM) (Chiozzi et al. 2016) was operating throughout all nights of observation whereas the stereo-SCIDAR (Osborn & Sarazin 2018) was operated from the 28th to the 5th only.As a part of the VLT Atmospheric Site Monitoring (ASM) package, measurements of local meteorological parameters were available for additional analysis.This work will outline the theoretical operating principle behind each instrument used in the campaign and present the major results from the campaign with discussion.The generalised FASS instrument is still under development and so its results have been excluded from this work.The measurements of the 24hSHIMM and RINGSS will be compared directly with the permanent instrumentation -the DIMM, MASS-DIMM and the S-SCIDAR -both on measurements of integrated parameters and on OT profiles using high-resolution vertical 2 profiles obtained from the S-SCIDAR.
TURBULENCE PROFILING INSTRUMENTS
The concepts and capabilities of each of the instruments used during the campaign are briefly summarised below.For this campaign, the other ESO turbulence profiling instruments: the robotic Slope Detection and Ranging (SLODAR) instrument and the adaptive optics facility on UT-4, were not operational and so are omitted.
Stereo-SCIDAR
S-SCIDAR, which is described in detail in Shepherd et al. (2014), is a triangulation technique that exploits observations of binary stars with a similar magnitude, requiring a telescope larger than 1-m diameter and low-noise camera due to the relative faintness of such targets, to measure the vertical distribution of OT in the atmosphere.The S-SCIDAR projects the pupil image from each star onto a separate CCD detector using a prism which yields sensitivity advantages over the typical SCIDAR implementation where the pupil images are overlapped on a single camera (Fuchs et al. 1998).The cross covariance of the spatial intensity fluctuations in the two pupil images is analysed to extract a high-resolution optical turbulence 2 (ℎ) dℎ profile comprised of 100 layers at 250m intervals.Additionally, by analysing the temporal evolution of the cross-covariance responses, it is possible to extract the wind velocity and direction of individual turbulent layers which enables estimation of the optical turbulence coherence time.The S-SCIDAR system at Paranal is mounted on one of the 1.8m auxiliary telescopes and has been extensively tested and validated against existing instrumentation at the site (Osborn et al. 2018).The S-SCIDAR data from this experiment has been processed using the latest corrections for finite spatial sampling described by Butterley et al. (2020a) which also includes subtraction of localised turbulence within the dome.
DIMM
The DIMM (Sarazin & Roddier 1990) consists of a small telescope with a CCD camera and a pupil-plane mask of two small circular apertures.Using a prism, the beams from the two apertures are imaged onto a detector and spatially separated.The seeing is measured by analysing the variance in differential position of the two focal spots (Tokovinin 2002).The DIMM is a simple, portable OT monitor and provides measurements of the seeing at one minute intervals.The Very Large Telescope (VLT) DIMM at Paranal is configured in a combined MASS-DIMM system mounted on a 28-cm Celestron C11 telescope and was installed as a part of the 2016 ASM upgrade on a 7-m tower.Limitations of the instrument include insensitivity to the bias introduced by optical propagation and only providing measurements of the seeing.
MASS
The MASS (Kornilov et al. 2003) is similarly based around a smalltelescope and measures the normalised intensity fluctuations resulting from propagation though turbulence, commonly referred to as the scintillation index, in 4 concentric apertures.Using the theory described by Tokovinin et al. (2003), weighting functions are generated for the 10 (4 normal and 6 differential) scintillation indices at vertical heights of 0, 0.5, 1, 2, 4, 8, 16 km and an inversion algorithm is used to reconstruct the 2 (ℎ) dℎ of each layer.The VLT MASS is combined in a MASS-DIMM configuration (Kornilov et al. 2007).As the MASS relies solely on measurements of scintillation, it is insensitive to ground-layer turbulence which can be accounted for using simultaneous measurements from the DIMM.The techniques described by Kornilov (2011) allow for estimation of the OT coherence time by measurement of the atmospheric second moment of wind and combination with the DIMM data.
RINGSS
RINGSS is a solid-state turbulence profiler developed to replace the technically obsolete MASS instruments (Tokovinin 2021).It uses a 5-inch Celestron telescope where image of a bright single star is optically transformed into a ring.This is achieved by combination of spherical aberration and defocus in the focal-reducer lens.The pixel scale is 1.57arcsec and the ring radius is 11 pixels.Cubes of 2000 ring images of 48×48 pixel format and 1 ms exposure time are recorded by a CMOS camera.Image processing consists in centering the rings and computing 20 harmonics of intensity variation along the ring (in the angular coordinate).Variances of these harmonics, averaged over 10 image cubes, are related to the turbulence profile by means of weighting functions in the same way as in MASS.RINGSS delivers turbulence integrals in eight layers at 0, 0.25, 0.5, 1... 16 km heights.The results refer to zenith; they are corrected for the finite exposure time bias and partially corrected for deviations from the weak-scintillation regime (saturation).The atmospheric time constant is determined by the method of Kornilov (2011).The instrument operates robotically.Its control provides for selection and change of targets, pointing and centering, and closed-loop focus control.
Scintillation signals in RINGSS are sensitive to the ground-layer turbulence because the image is not focused (analogue of a generalized SCIDAR).Alternative estimation of seeing is made using radial distortions of the rings, like in a DIMM.This "sector" seeing agrees reasonably well with the scintillation-based seeing: the ratio of their mean values is 1.038, the correlation coefficient is 0.97, and the rms scatter around the regression line is 0.11 ′′ .Under excellent conditions, the sector seeing is systematically larger; this bias appears when turbulence in the ground layer is less than 2 × 10 −13 m 1/3 and is absent otherwise.We attribute this effect to imperfect focusing of the ring in the radial direction, analogous to the similar bias in a defocused DIMM.In the following analysis, we use only the scintillation-based seeing measured by RINGSS, while the supplementary data provide the alternative "sector" seeing values as well.
24hSHIMM
The 24hSHIMM (Griffiths et al. 2023) is based around a Shack-Hartmann wavefront sensor (SHWFS) and portable 11-inch telescope design.It observes single, bright stars and measures both the intensity and wavefront angle-of-arrival (AoA) fluctuations in each of the SHWFS focal spots.The spatial statistics of the scintillation are compared with weighting functions (Robert et al. 2006) and a non-negative least squares algorithm is used to reconstruct a low-resolution 2 (ℎ) dℎ profile.The 24hSHIMM is not negativelyconjugated, therefore a scintillation-based reconstruction is insensitive to the ground layer and integrated turbulence strength measurements from SHWFS AoA fluctuations are used to overcome this limitation.The 24hSHIMM is designed to operate for 24-hours a day, typically through use of an InGaAs camera operating in the short-wave infrared to reduce sky background light and minimise the effects of strong turbulence.The 24hSHIMM utilises the FADE method (Tokovinin et al. 2008) to estimate the coherence time of the atmospheric turbulence.This method of direct measurement of coherence time is an improvement on the previous implementation using wind-speed profiles from the ERA5 ECMWF forecast (Hersbach et al. 2020) which are limited by low spatial and temporal resolution.Another notable change from the original implementation of the 24hSHIMM is that in this work, measurements are obtained by a CMOS camera and a 600nm longpass filter which introduces additional constraints on performance.
Campaign details
The location of each instrument on the Paranal observatory platform is shown in figure 1.The 24hSHIMM and RINGSS were mounted on concrete pillars adjacent to the 1998 DIMM tower within 2m of one-another.The FASS was mounted on a tripod slightly further away, between the old-DIMM tower and SLODAR crate.The 24hSHIMM was mounted approximately 2m off of the ground, the RINGSS and FASS were at about 1.5m.Wind breaks were set up along the Northern fence next to the instruments.
The local environments for the S-SCIDAR and MASS-DIMM are therefore significantly different; they are both much further away from any large buildings and more elevated from the ground.The MASS-DIMM is on a 7 m tower and the S-SCIDAR was mounted on VLTI auxiliary telescope two; the alt-az altitude axis of which is 5-m above surface (Koehler & Flebus 2000).We therefore expect poorer agreement in the seeing between these instruments and the monitors located near the VLT Survey Telescope (VST), as local turbulence conditions are likely to differ significantly.
The list of targets for the RINGSS was shared at the beginning of the experiment and efforts were made to synchronise target stars where possible between the visiting turbulence monitors.The MASS-DIMM and S-SCIDAR however were using separate target lists.
RESULTS
The overall results for this campaign are laid out below.This includes both direct comparison of integrated parameter measurements between the different instruments, and a comparison of optical turbulence profiles with the high-resolution S-SCIDAR.A focus is primarily made on comparison of the developmental instruments 24hSHIMM, RINGSS with the well-characterised and permanently installed S-SCIDAR and MASS-DIMM.However all instruments have been compared where appropriate.The comparison between 24hSHIMM and RINGSS is of interest as the two instruments were co-located, observing similar targets and so are much more likely to agree.The agreement of the S-SCIDAR and MASS-DIMM is also of interest to compare to long-term monitoring results and previous studies.
To generate comparison plots, for the instrument on the x-axis, each measurement has been directly plotted against the nearest mea-surement from the instrument on the y-axis within a maximum time difference of two minutes.If a corresponding measurement could not be found within two minutes, the data point has been excluded from the plot to minimise the effects of temporal evolution of the turbulence on the comparison.This two minute interval was chosen to match the integration time used by the S-SCIDAR as it was the longest of all the instruments.As the algorithm finds the nearest measurement within the search window and the other instruments all have a cadence of a minute or less, reducing the interval to one minute, for example, was observed to produce almost identical statistical comparison parameters.In each comparison plot, a white dashed line represents the line of perfect agreement between the instruments, and the Pearson correlation coefficient, , bias, B, unbiased root mean square error, RMSE, and mean ratio, MR, of each data set is reported in the top-left of the graph.Mathematical definitions of the latter three parameters may be found in appendix A. These comparison parameters are additionally summarised for each figure in table 2. The colour gradient indicates the density of measurements at each point in the graph with black the lowest and pale yellow the highest.The median values from these findings will also be compared where useful to results from long-term studies on seeing conditions at Paranal with Butterley (2021) reporting the latest S-SCIDAR results and Otarola (2021) the results from the MASS and DIMM.These results can be found in table 1.All integrated turbulence parameter measurements displayed below are derived at zenith and a wavelength of 500 nm.All turbulence profiles are given as a function of vertical height.Finally, the distribution and temporal sequences of 2 (ℎ) dℎ profiles measured by the instruments will be directly compared with the S-SCIDAR through a binning process to investigate accuracy of OT profile characterisation, and the first results from the 24hSHIMM of 24-hour continuous monitoring of OT at Paranal are presented in full.
Seeing
The astronomical seeing, 0 , describes the angular full-width-athalf-maximum (FWHM), typically measured in units of arcseconds, of the seeing-limited point spread function for long-exposure imaging through optical turbulence.It can be calculated using the Fried parameter 0 (Fried 1966), where = 2/ is the wavenumber, is the wavelength of the light, the zenith angle of observation in radians, ℎ the altitude of a turbulent layer in metres, 2 (ℎ) the refractive index structure constant, given in units of m −2/3 .The relationship between the Fried parameter and the seeing is then given by 0 = 0.98 (2) Accurate measurement of the astronomical seeing is the most fundamental requirement of an optical turbulence monitor as it quantifies the integrated turbulence strength of the atmosphere and directly relates this to the degree of image distortion.Seeing is dynamic, can change rapidly and is highly dependant on location and pointing direction (Tokovinin 2023) which leads to discrepancies between instruments, even for well-synchronised measurements.Median seeing measurements in table 1 indicate that the two instruments located in the northern end of the site, near to the VST and installed at a lower height above ground, are measuring substantially stronger seeing that the MASS-DIMM and S-SCIDAR.This is most likely due to local turbulence effects.There is however a very strong agreement between the DIMM and S-SCIDAR measurements, and a mean ratio close to 1, despite their separation on the site -but noting their similar height above the ground and isolated locations this is not surprising.
It is known that the local seeing at the 1998-DIMM tower is slightly stronger than the current 2016-MASS-DIMM.The median seeing calculated from several years of measurements with the 1998-DIMM between 2010-01-01 and 2015-05-22 was found to be 0.98 ′′ compared to the 2016-DIMM long term seeing of 0.71 ′′ .This supports a location-based argument for some of the discrepancy between the visiting and the ESO instruments.Previous campaigns using the Generalised Seeing Monitor at the same location have found seeing values of 0.88 ′′ (Martin et al. 2000) and 1.07 ′′ (Dali Ali et al. 2010).Additionally, high-resolution profiling of the surface layer carried out by Butterley et al. (2020b) using the surface-layer SLODAR identifies an exponentially decaying turbulence strength with altitudehence we also expect the higher elevation of the MASS-DIMM and S-SCIDAR to result in lower seeing.
Individual comparisons of seeing measured by each instrument are displayed in figure 2. It is extremely encouraging that all seeing measurements display strong correlation with the minimum of = 0.70 for the RINGSS compared with the S-SCIDAR.As expected, due to co-location and overlapping targets, the 24hSHIMM and RINGSS display a very strong correlation of 0.83, however there is a significant bias between the two despite their proximity.A number of factors may contribute to this, including the RINGSS corrections for finite exposure time and partial saturation of scintillation -conditions which would lead to underestimates of fast-moving and high altitude turbulent layers on the 24hSHIMM -there is a also a small height offset between the two with the RINGSS being closer to the ground which could lead to slightly stronger turbulence above the telescope pupil.The correlation between the DIMM and S-SCIDAR is equally strong but with far less bias -the results are also consistent with the long term monitoring as seen in table 1.
Free atmosphere seeing
The free atmosphere seeing, 0, is calculated as the integrated seeing of all turbulent layers with an altitude of 500 m or greater for the MASS, RINGSS and S-SCIDAR.The 24hSHIMM is limited by a large sub-aperture size of 4.7 cm and cannot sample the highest frequency scintillation fluctuations produced by low-altitude turbulence.This is due to height scaling of the characteristic size of scintillation speckles -given by the radius of the first Fresnel zone, ≈ √ .It therefore lacks the sensitivity required to reconstruct a layer at 500 m, so a direct comparison with the other instruments is not possible and it has been excluded.Figure 3 details the measurements obtained with the three other instruments.
Isoplanatic angle
The isoplanatic angle is defined by Roddier (1981) as This quantity is of particular interest for design and operation of AO systems as it represents the separation angle between a guide star and target which will result in 1 rad 2 RMS wavefront error for phase corrections.It is particularly of interest when considering target availability in single conjugate adaptive optics (SCAO) and in calculation of AO error budgets.
Table 1.Median values of parameters obtained during this campaign, marked in the columns as '2023', from all instruments are compared with long-term site monitoring results of Otarola (2021); Butterley (2021) with the column labels 'long-term'.There are some blank entries which correspond to unavailable data -either because the instrument cannot measure the parameter or there is no source for long-term data.The median values for the 24hSHIMM are calculated excluding data taken during the daytime.
N Profiles
0 ( ′′ ) 0, ( ′′ ) 0 ( ′′ ) 0 (ms) Figure 4 displays the comparisons of isoplanatic angle measured by all instruments.Unlike measurements of the seeing, it is observed that there is less correlation between all instruments.However, the variation of isoplanatic angle during the campaign was small.The strongest correlation, 0.54, is found between 24hSHIMM and RINGSS which observed same targets, while other profilers sampled different turbulent volumes.The ℎ 5/3 scaling in Eq. 3 implies that this parameter is highly sensitive to turbulence in the upper atmosphere.Therefore an accurate characterisation will require sensitivity
Coherence time
Knowledge of the coherence time is essential for AO as it defines the minimum bandwidth of the system.The optical turbulence coherence time is typically on the scale of a few ms.It is related to the wind speed profile and turbulence strength in the following way (Roddier 1981), where 5/3 is the weighted mean of the wind speed raised to the power of 5/3, The instruments in this study employ a variety of strategies to measure the coherence time.The S-SCIDAR analyses the spatio-temporal cross-correlations of the scintillation measured in the pupil.Peaks that match atmospheric layers translate across the auto-covariance map with each successive time offset due to translation of the turbulent layers with wind.The direction and speed of each of the layers is recorded and the mean wind speed calculated from Eq. 5.The S-SCIDAR is only able to directly estimate the wind speed of the strongest layers.Weak layers with no detected wind speed are assigned a value through interpolation of the measured wind speed profile.The 24hSHIMM takes a different approach, utilising the FADE method (Tokovinin et al. 2008), which involves fitting response functions, determined by layer wind speeds and 2 (ℎ) dℎ , to the measured temporal structure function of the Zernike defocus coefficient of the atmospheric wavefront distortions.The 24hSHIMM analysis differs slightly from the FADE instrument as wavefronts are reconstructed by the Shack-Hartmann yielding direct measurements of the Zernike defocus term, and only layer wind speeds need to be fitted.As the 24hSHIMM sampling rate was limited to 100Hz for this experiment, it was necessary to exclude 362 measurements that had a 5/3 > 15 ms −1 as the defocus structure function curve could not be sampled with a sufficient temporal resolution to fit a wind speed profile.The MASS-DIMM and RINGSS utilise the method described in Kornilov (2011) of including a wind shear component in the weighting functions, continuous exposures without gaps, and a fitting process to estimate the second moment of the wind 2 with the approximation of 2 ≈ 1.1 5/3 found by Kellerer & Tokovinin (2007) enabling an estimate of the coherence time.
Figure 5 displays comparisons of coherence time measurements for the four instruments.The RINGSS and MASS use the same method of calculating coherence time and agree strongly with little bias.The two instruments also agree well with the S-SCIDAR, again with little bias.The 24hSHIMM shows good correlation with all instruments too.The bias however is small but positive with respect to the S-SCIDAR and MASS-DIMM.Lower elevation and imaging through more of the surface layer should lead to a negative bias, suggesting that the instrument may be overestimating coherence time which could be a result of the low frame rate.Finally, the lower correlation of some instruments with the S-SCIDAR may result from the fact that S-SCIDAR measures wind direction and corrects line-of-sight wind speed measurements to the wind speed parallel to the ground, which other instruments cannot do.
Influence of wind direction
Previous studies have observed that wake produced downwind of large telescope structures can have a significant effect on seeing conditions (Sarazin et al. 1990).Additionally, seeing at the 1998-DIMM tower has historically been stronger than that observed by the UTs for north-easterly and south-easterly winds (Sarazin et al. 2008).A later study by Lombardi et al. (2010) related this phenomenon to an increase in the strength of the surface layer.We therefore expect wind direction to influence the agreement between instruments in this campaign.The wind rose, figure 6, shows the distribution of wind speeds and directions measured 30 m above the ground by the meteo-tower between sunset and sunrise for all six nights of the campaign.The 30 m measurement is used over the 10 m measurement to minimise bias introduced by the Unit Telescopes (UTs) to the South and the VST to the SSW.The radial extent of the bars represents the fraction of the data with a given wind direction and it suggests, similar to previous studies such as Lombardi et al. (2009), that it is mainly from the NNE.
Figure 7 shows how the bias between pairs of instruments changes as a function of wind direction for eight directional bins.In addition, the error bars indicate the bias-corrected RMSE of the comparisons for each wind direction.Due to insufficient data for some wind directions, the correlation is not plotted.Additionally, there were no S-SCIDAR data points between South and West and insufficient data for all instruments for the West bin.These points have therefore been omitted.Seeing measurements during the campaign appear to be strongly influenced by wind direction.For instrument pairs other than the S-SCIDAR and MASS-DIMM, the RMSE of instrument comparisons is larger for northerly winds.The RINGSS bias appears sensitive to the wind direction with the largest bias corresponding to north-westerly winds, but the 24hSHIMM does not follow the same pattern -only seeing a larger bias compared to the MASS-DIMM towards the North-West.However there are few data points for this bin.This figure does not take into account instrument pointing direction, which can also lead to discrepancy in measurements.As this sample of six nights is relatively small, the influence of pointing direction was investigated instead through analysing the median and standard deviation of seeing measured by the 2016-DIMM for all data in the ESO archive.This analysis showed a clear increase in median seeing for north-easterly and south-easterly winds for all pointing angles.Features strongly dependent on pointing angle included: larger variability at low elevation angles when the DIMM points SE and wind blows from the W and SW, and for the DIMM pointing SW while the wind blows from the North.The larger spread of data and bias for northerly winds experienced by the 24hSHIMM and RINGSS may be related to their proximity to the edge of the platform, as shown in figure 1, as air from the ground level will be driven up the mountain and mix with cooler air at the platform.By contrast, wind from the South will traverse the platform before reaching the 24hSHIMM and RINGSS.The S-SCIDAR vs MASS-DIMM seeing comparison has no identifiable dependence on wind direction which is expected as both instruments are raised above the ground and located away from the platform edges and buildings.
For the free atmosphere seeing and isoplanatic angle, dependence on wind direction at 30 m seems unlikely as both parameters are insensitive to ground layer turbulence.In reality, non-Kolmogorov turbulence in the surface layer which may arise from interaction of wind with buildings or heat sources can "confuse" turbulence monitoring instruments that expect a specific power spectrum (typically Von Karman or Kolmorogov), thus leading to inaccuracies in the characterisation of the turbulence profile that may depend on wind direction.Such effects are also encountered at low wind speeds and have been identified at the site by the SLODAR (Butterley et al. 2020b).Figure 6 shows that for southerly winds, a wind speed of less than 3 ms −1 is proportionally more frequent.For the coherence time, which is also dependent on the vertical wind speed profile, the biases are small relative to the spread of the data, except for the SW which may result from a small number of samples.The wind direction does not seem to have a significant influence on the bias or RMSE of these comparisons, however there is a trend towards a larger negative bias for most instrument comparisons in the NE to SW section of the graph.A full treatment of wind directional discrepancies at Paranal would require a significantly larger data set and is beyond the scope of this study.
Optical turbulence profiles
Optical turbulence profiles are characterised by the refractive index structure constant 2 as a function of vertical height above the ground.The instruments in this study record the sum of 2 over a given volume dℎ for each layer using an inversion process.To facilitate a comparison between all instruments which use different models and layers, the RINGSS, MASS-DIMM and 24hSHIMM are directly compared with the high-resolution S-SCIDAR profiles through binning using instrument response functions.
The response functions dictate the measured 2 (ℎ) dℎ response to a single, thin turbulent layer placed at any height throughout the atmosphere.These functions are typically evaluated in simulation by passing a single, thin layer from the ground to the upper atmosphere and plotting the 2 (ℎ) dℎ measured by the instrument in each altitude bin.For scintillation-based instruments such as RINGSS, S-SCIDAR and MASS the response functions usually manifest as triangles on a log scale of height, centred on the altitude of the turbulent layer reconstructed and crossing adjacent bins at half of the input turbulence strength (Tokovinin et al. 2003;Tokovinin 2021).
For the 24hSHIMM, this approximation also holds well, except for between the ground layer and the first layer.The response functions (ℎ) for the 24hSHIMM and RINGSS are displayed in figure 8 on a linear scale of height.These instruments, as well as MASS, estimate turbulence strength in discrete layers as 2 (ℎ )dℎ = ∫ (ℎ) 2 (ℎ)dh.The response functions for the MASS can be found in Kornilov et al. (2003).
Figure 9 displays a box and whisker plot of optical turbulence profile measurements from the 24hSHIMM, RINGSS and MASS- DIMM compared with contemporaneous S-SCIDAR profiles.The S-SCIDAR profiles have been binned down to the instrument layers using the response functions and only data within ±2 minutes of an S-SCIDAR measurement have been used.The whiskers represent the 5th and 95th percentiles of the distribution, the median is shown as a dashed black line and the mean as a solid magenta line.It is therefore possible to simultaneously compare mean profiles and distributions of measurements in individual layers.Figure 9 indicates that all instruments measure a significantly stronger ground layer than the equivalent S-SCIDAR measurement.
A notable feature of the MASS-DIMM profile is a significant underestimation in the 8 km layer, which appears to be the driving cause of the smaller value of median free-atmosphere seeing.For RINGSS and the 24hSHIMM, some layers register zero 2 (ℎ) dℎ , hence anomalous boxes and whiskers such as the 4km layer for the 24hSHIMM and 2 km layer in RINGSS on a a log-scale of 2 (ℎ) dℎ .Mean values however agree well for the free-atmosphere layers.
Figure 10 shows a detailed comparison between vertical turbulence profiles measured by RINGSS with all 611 available S-SCIDAR profiles matched in time and resolution.Despite different locations and different target sources, we note a strong agreement of timing and localisation of strong turbulence packets, especially in the 0.5 and 1-km layers.The ground layer is not included in this comparison.Figure 11 shows a similar plot for the 24hSHIMM.It suggests that the correlation between lower-altitude layers is higher than for highaltitude layers, evidencing the low correlation in isoplanatic angle.
Day and night measurements
The 24hSHIMM measures OT profiles continuously for 24-hours a day by operating at short-wave infrared wavelengths.Compared to the visible light, this extends the validity of the weak-scintillation assumption and reduces the sky background.Additional techniques for rapid background subtraction (Griffiths et al. 2023) are also employed to ensure accurate photometry.
Figure 12 shows a continuous plot of the three main integrated turbulence parameters estimated by the 24hSHIMM: seeing, isoplanatic angle and coherence time.Because the instrument produced a measurement every 1-2 minutes, for presentation purposes the data have been binned such that each data point represents the average of any measurements that fall into ten-minute bins.The sharp diurnal variation in seeing is immediately evident from the graph, with a repetitive, sharp drop in the seeing after sunset leading to the best conditions in earliest part of the night.The general trend thereafter appears to be a gradual increase in the seeing until just after sunrise where it rises very strongly.More work is needed to understand the underlying processes behind this behaviour and the influence of meteorological parameters.
The median value of the daytime seeing, calculated between sunrise and sunset, was found to be 2.65 ′′ , isoplanatic angle 2.05 ′′ and coherence time 2.4 ms.It is notable that measurements of the isoplanatic angle, which is insensitive to low-altitude turbulence, do (ℎ)dℎ profile measurements for all instruments with contemporaneous measurements from the S-SCIDAR.The red boxes show the instrument data from each fitted layer, and the adjacent blue boxes the contemporaneous measurements (within +/-two minutes) from the S-SCIDAR which have been binned to match the instrument layers using the response functions.The extent of coloured boxes represents the first and third quartiles, the dashed line the median measurement, the magenta line the mean, and the whiskers the fifth and 95th percentiles of the distribution.From top left to bottom right, the plot shows the mean S-SCIDAR profile, and box and whisker plots for the 24hSHIMM, RINGSS and MASS-DIMM compared with S-SCIDAR.Significantly smaller values in the top-left panel, compared to other panels, are explained by the thinner dℎ = 0.25 km layers of the S-SCIDAR profiles.
not experience the same distinct variation.This suggests that the increased turbulence strength during daytime is a result of solar heating at the ground affecting the boundary layer, and the upper atmosphere is relatively unaffected.The coherence time follows a similar trend to the Fried parameter likely due to dominance of the strong ground layer turbulence.
CONCLUSIONS
An optical turbulence monitoring campaign has been carried out at Cerro Paranal observatory between the 27th February and 5th March 2023.The aim of this study was to characterise novel turbulence monitoring instruments, the 24hSHIMM and RINGSS, against ex- isting instruments at the site through comparison measurements of vertical OT profiles and integrated parameters including the seeing, free-atmosphere seeing, isoplanatic angle and coherence time.
Data collected from these two instruments during the campaign were further compared against measurements from the S-SCIDAR and the MASS-DIMM by assessing the RMSE, bias and correlation of contemporaneous data from pairs of instrument.Additionally median values from the whole campaign were calculated and compared to long-term averages.
It was found, as in previous campaigns, that the seeing measured near the old 1998-DIMM tower was significantly larger than for the S-SCIDAR and 2016-MASS-DIMM.In general, however, strong correlation was found across all seeing and free-atmosphere seeing measurements.Isoplanatic angle measurements displayed a close agreement in median values, but were less correlated between all instruments, which is likely a result of limitations in sensitivity to high altitude turbulence and differences in the sampled turbulence volumes.Coherence time measurements were strongly correlated between all instruments, however the RMSE of distributions was relatively large.The influence of wind direction on statistical agreement between measurements was also investigated which showed increased spread and bias in RINGSS and 24hSHIMM seeing comparisons with the MASS-DIMM for northerly winds.Additionally, changes in bias for parameters that should have no dependence on the wind direction could be attributed to non-Kolmogorov effects.
The accuracy of OT profiling was also investigated by comparison of profiles with contemporaneous S-SCIDAR measurements binned using instrument response functions.The two visiting instruments were found to agree well with the S-SCIDAR, with expected bias towards stronger turbulence in the ground layer.It was also observed that the MASS-DIMM systematically underestimates the 8 km layer.
Finally, the first measurements of continuous optical turbulence parameters at Paranal were presented which indicate a predictable and extreme diurnal variation in seeing with a median daytime value of 2.65 ′′ compared to equivalent night-time median of 0.88 ′′ , which is assumed to be driven by changes in the boundary layer due to solar heating in the early morning and rapid cooling in the evening as similar changes are not present in the isoplanatic angle which is sensitive to high altitude turbulence.This experiment suggests that the best seeing conditions are in the earliest part of the night.
Figure 1 .
Figure 1.Location of turbulence monitoring instrumentation described in section 2. Instruments relevant to this study are indicated by red circles.Original image credit: ESO.
Figure 2 .
Figure 2. Comparison of contemporaneous seeing measurements during the campaign from the DIMM, S-SCIDAR, 24hSHIMM and RINGSS.
Figure 3 .
Figure 3.Comparison of contemporaneous free atmosphere seeing measurements during the campaign from the MASS-DIMM, S-SCIDAR, and RINGSS.
Figure 4 .
Figure 4. Comparison of contemporaneous isoplanatic angle measurements during the campaign by the MASS-DIMM, S-SCIDAR, 24hSHIMM and RINGSS.
Figure 5 .
Figure 5.Comparison of contemporaneous measurements of the atmospheric turbulence coherence time by the MASS-DIMM, S-SCIDAR, 24hSHIMM and RINGSS.
Figure 6 .
Figure 6.A wind rose displaying the distribution of wind speeds and directions measured 30 m above the ground by the Paranal meteo-tower for the six nights of the campaign.
Figure 7 .
Figure 7.A plot showing the bias of measurements for all four integrated turbulence parameters, and the RMSE indicated by the error bars, as a function of wind direction for key pairs of instruments compared in this study.For the seeing, only DIMM data is used, but for other parameters the MASS-DIMM results use the same line style.The legend indicates the Y -X instrument pair for which the bias and RMSE have been plotted.
Figure 8 .
Figure 8.A plot of the response functions for the 24hSHIMM and RINGSS.The alternating line styles differentiate the response functions of each reconstructed layer.The sum of responses from all layers is approximately one.
Figure 9 .
Figure 9.A comparison of 2 (ℎ)dℎ profile measurements for all instruments with contemporaneous measurements from the S-SCIDAR.The red boxes show the instrument data from each fitted layer, and the adjacent blue boxes the contemporaneous measurements (within +/-two minutes) from the S-SCIDAR which have been binned to match the instrument layers using the response functions.The extent of coloured boxes represents the first and third quartiles, the dashed line the median measurement, the magenta line the mean, and the whiskers the fifth and 95th percentiles of the distribution.From top left to bottom right, the plot shows the mean S-SCIDAR profile, and box and whisker plots for the 24hSHIMM, RINGSS and MASS-DIMM compared with S-SCIDAR.Significantly smaller values in the top-left panel, compared to other panels, are explained by the thinner dℎ = 0.25 km layers of the S-SCIDAR profiles.
Figure 10 .
Figure 10.Turbulence profiles measured simultaneously by RINGSS (upfacing blue bars) and S-SCIDAR (down-facing magenta bars).S-SCIDAR is matched in resolution and time to RINGSS with the sample number indicating the nth S-SCIDAR measurement taken during the campaign.The width of each band is 5 × 10 −13 m 1/3 .
Figure 11 .
Figure 11.Turbulence profiles measured simultaneously by 24hSHIMM (upfacing blue bars) and S-SCIDAR (down-facing magenta bars).S-SCIDAR is matched in resolution and time to 24hSHIMM with the sample number indicating the nth S-SCIDAR measurement taken during the campaign.The width of each band is 2 × 10 −13 m 1/3 .
Figure 12 .
Figure 12.Integrated parameters measured by the 24hSHIMM during the campaign.The black line represents 24hSHIMM measurements, the red line DIMM measurements for seeing and MASS-DIMM for the coherence time and isoplanatic angle, and the blue line the RINGSS.All data sets have been binned to 10-minute intervals for presentation and dates are in UTC.The white, grey and light grey shades of the background represent daytime, night and twilight respectively
Table 2 .
Summary of statistical comparison parameters all graphs.
tion into account for observing at lower zenith angles, saturation of scintillation produced by the highest-altitude layers is an additional source of error for monitors based on weak-scintillation theory.The exception in this experiment being the RINGSS and MASS which implement a correction process.This combination of factors is likely to explain the smaller correlation observed in measurements from the four instruments, while the median values agree fairly closely. | 9,594.2 | 2024-02-10T00:00:00.000 | [
"Physics",
"Environmental Science",
"Engineering"
] |
Midnight sector observations of auroral omega bands
[1] We present observations of auroral omega bands on 28 September 2009. Although generally associated with the substorm recovery phase and typically observed in the morning sector, the features presented here occurred just after expansion phase onset and were observed in the midnight sector, dawnward of the onset region. An all‐sky imager located in northeastern Iceland revealed that the omega bands were ∼150 × 200 km in size and propagated eastward at ∼0.4 km s while a colocated ground magnetometer recorded the simultaneous occurrence of Ps6 pulsations. Although somewhat smaller and slower moving than the majority of previously reported omega bands, the observed structures are clear examples of this phenomenon, albeit in an atypical location and unusually early in the substorm cycle. The THEMIS C probe provided detailed measurements of the upstream interplanetary environment, while the Cluster satellites were located in the tail plasma sheet conjugate to the ground‐based all‐sky imager. The Cluster satellites observed bursts of 0.1–3 keV electrons moving parallel to the magnetic field toward the Northern Hemisphere auroral ionosphere; these bursts were associated with increased levels of field‐aligned Poynting flux. The in situ measurements are consistent with electron acceleration via shear Alfvén waves in the plasma sheet ∼8 RE tailward of the Earth. Although a one‐to‐one association between auroral and magnetospheric features was not found, our observations suggest that Alfvén waves in the plasma sheet are responsible for field‐aligned currents that cause Ps6 pulsations and auroral brightening in the ionosphere. Our findings agree with the conclusions of earlier studies that auroral omega bands have a source mechanism in the midtail plasma sheet.
Introduction
[2] Auroral omega bands were first reported as a distinct class of auroral structure by Akasofu and Kimball [1964].Originally, the name referred to the distinct, undulating shape of the auroral arc, which resembled an inverted Greek letter W.However, over nearly 50 years of usage, the classification has gradually evolved.For example, whereas Akasofu and Kimball's omega bands were distorted arcs, Lyons and Walterscheid [1985] presented observations of omega bands with a dark, inverted W shape formed by bright torches extending poleward from the auroral oval, and Opgenoorth et al. [1994] reported "streets" of multiple omega band structures in which undulations on the poleward boundary gave rise to alternating bright humps and dark bays.Lühr and Schlegel [1994] described omega bands as "a luminous band from which tongue-like protrusions extend toward the north" with the bright tongues shaped like a Greek W and the dark area separating adjacent tongues shaped like an inverted W. In recent research, the term omega band has been used to described all of the above variants on what is assumed to be the same basic auroral structure [Syrjäsuo and Donovan, 2004;Safargaleev et al., 2005;Vanhamäki et al., 2009].
[3] Regardless of the exact auroral configuration, omega bands exhibit many common properties.Omega bands and magnetic pulsations in the Ps6 wave band (4-40 min periodicity) are usually observed simultaneously [Kawasaki and Rostoker, 1979;André and Baumjohann, 1982], with magnetic disturbances interpreted as evidence of the passage of field-aligned currents within the auroral structures [Lühr and Schlegel, 1994;Wild et al., 2000].Omega bands, typically 400-1000 km in size, are usually observed propagating eastward (i.e., dawnward) at speeds of 0.4-2 km s −1 in the morning sector auroral zone and are generally associated with the recovery phase of magnetospheric substorms [e.g., Vanhamäki et al., 2009, and references therein].
[4] While the distribution of quasi-stationary, fieldaligned currents within omega bands is broadly understood [Lühr and Schlegel, 1994;Wild et al., 2000;Amm et al., 2005;Kavanagh et al., 2009], the mechanism responsible for omega band formation remains unclear.The reader is directed to Amm et al. [2005] for a useful review of the various models proposed to explain omega band generation.These models include energetic particle precipitation in the morning sector originating from the outer edge of the ring current region [Opgenoorth et al., 1994], an electrostatic interchange instability developing at the poleward (tailward) edge of a torus of hot plasma in the near-Earth magneto-sphere during the substorm recovery phase [Yamamoto et al., 1997], and the structuring of magnetic vorticity and field-aligned currents via the Kelvin-Helmholtz instability [Janhunen and Huuskonen, 1993].
[5] In this paper, we present space-and ground-based measurements of omega bands observed during the night of 27-28 September 2009.The omega bands studied are slightly unusual in that they were observed in the midnight (21-03 MLT) sector ionosphere, rather than the morning (03-09 MLT) sector, and occurred shortly after a substorm expansion phase onset/intensification (rather than during a substorm recovery phase).Our investigation of these somewhat atypical omega bands reveals that unlike previously reported examples, they are relatively small and slow moving.Although in situ field and plasma measurements from the conjugate region of the magnetosphere indicated enhanced but variable Alfvénic Poynting flux and bursts of field-parallel moving electrons, a clear one-to-one correspondence with individual omega bands was not observed.In the this paper, we first introduce the experimental instrumentation used in our study, then present the upstream, ground-and space-based observations before discussing and summarizing our findings.
Instrumentation
[6] Figure 1 shows the disposition of spacecraft used in this study.Upstream solar wind and interplanetary magnetic field (IMF) conditions were provided by a single probe of the NASA Time-History of Events and Macroscale Interactions during Substorms (THEMIS) mission [Angelopoulos, 2008]; magnetospheric plasma and magnetic field measurements came from the four satellites of the ESA Cluster mission [Escoubet et al., 1997[Escoubet et al., , 2001]].Figure 1 shows the location of these spacecraft at 0000 UT on 28 September 2009 in the X-Z and X-Y GSM planes, with the position of each indicated by the labeled symbols.Also indicated for reference are magnetic field lines derived from the Tsyganenko 2001 model [Tsyganenko, 2002a[Tsyganenko, , 2002b]], hereafter referred to as the T01 model, and a model magnetopause [after Shue et al. 1997].The solar wind and IMF parameterization of these models is discussed further in section 3. The present study exploits ion plasma data from the electrostatic analyzer (ESA [McFadden et al., 2008a, 2008b]) and magnetic field data from the fluxgate magnetometer (FGM [Auster et al., 2008]) on the THEMIS C probe in order to monitor the solar wind and IMF, respectively.During the interval of interest, THEMIS C (indicated by the black square in Figure 1) was located in the solar wind ∼22 R E upstream of the Earth, approximately in the Earth's orbital plane but offset from the Sun-Earth line by ∼4 R E in the dawnward direction.
[7] At 0000 UT on 28 September 2009, the four Cluster satellites were moving tailward and southward toward apogee in the postmidnight sector magnetosphere.Clusters 1, 3 and 4 (indicated by the black, green and blue circles, respectively) were located in the northern tail lobe between 6 and 8 R E downtail of the Earth at ∼0130 magnetic local time (MLT).Cluster 2 (indicated by the red circle) was somewhat farther downtail at a radial distance ∼9 R E and a slightly earlier magnetic local time of ∼0040 MLT.In this study we exploit magnetic field measurements made by the Cluster fluxgate magnetometer experiment (FGM [Balogh et al., 1997[Balogh et al., , 2001]]), electron plasma observations made by the Cluster plasma electron and current experiment (PEACE [Johnstone et al., 1997;Owen et al., 2001]) and electric field measurements from the electric fields and waves instrument (EFW [Gustafsson et al., 1997[Gustafsson et al., , 2001]]).
[8] Ground-based auroral observations were provided by a new all-sky imager (ASI) located on the Tjörnes peninsula in northeastern Iceland (66.2°N, 17.1°W, geographic coordinates).This color "Rainbow" imager [Partamies et al., 2007] is similar in both design and operation to those of the THEMIS ground-based observatory (GBO) array; the main difference is the use of a color CCD imager to provide color all-sky images (THEMIS GBOs produce only gray scale images).Images are automatically recorded at a rate of 10 frames per minute during hours of darkness, yielding a 6 s cadence.Two additional imagers deployed at þykkvibaer (southwestern Iceland) and Tórshavn (Faroe Isles) were not used in this study due to unfavorable weather conditions at those sites during the period of interest.
[9] Observations of ionospheric flow were derived from the Iceland East radar of the Super Dual Auroral Radar Network (SuperDARN [Chisham et al., 2007]).This coherent scatter, high-frequency radar, located at þykkvibaer in southwestern Iceland, one half of the Co-operative UK Twin-Located Auroral Sounding System radar pair (CUTLASS [Lester et al., 2004]), has a field of view (FOV) that extends northeastward, covering an area over 3 × 10 6 km 2 .In standard operations the FOV comprises 16 discrete beams separated by 3.24°in azimuth, with each beam subdivided into 75 individual range bins 45 km in length.Like all SuperDARN radars, the Iceland East radar is a frequency agile system (8-20 MHz) that routinely measures the line-of-sight (LOS) Doppler velocity and spectral width of, and the backscattered power from, ionospheric plasma irregularities.However, this particular radar has been equipped with a so-called "stereo" capability, enabling two beams to be sounded simultaneously by interleaving two transmitted pulse sequences at slightly offset frequency channels.During the interval of interest, the stereo capability was deployed to sound the full FOV (i.e., scanning through beams 0, 1, 2, 3..15 in sequence) using channel A while sounding only one beam direction (beam 5) using channel B. Given a 3 s dwell time on each beam (and allowing for radar integration and minute timing synchronization with other SuperDARN radars), this mode returned a full scan of the complete FOV every minute (via channel A) and measurements along the high-resolution beam every 3 s (via channel B).In this study, we shall focus on measurements from the high time resolution channel (B).
[10] Finally, to reveal the magnetic perturbations associated with auroral features observed by the above experiments, we exploit 1 s resolution ground magnetic field measurements from a fluxgate magnetometer colocated with the Tjörnes Rainbow ASI and deployed by the Japanese National Institute of Polar Research (NIPR) [Sato and Saemundsson, 1984].
[11] Figure 2 shows the distribution of the instruments employed in this study.The FOV of the Tjörnes ASI is indicated by the shaded dark gray circle.Specifically, this corresponds to the FOV projected to 110 km altitude and for look directions within 80°of the zenith (disregarding the portion of the FOV within 10°of the horizon where line-ofsight projection gives rise to the greatest uncertainties).The full FOV of the Iceland East SuperDARN radar (sounded by channel A) is shown by the light gray shaded region, with the high time resolution beam (beam 5, sounded by channel B) outlined by the gray dotted lines.The locations of the Tjörnes ASI/magnetometer and þykkvibaer radar sites are indicated by crossed circles labeled "TJRN" and "þYKK", respectively.
[12] For reference, the magnetic footprints of the Cluster satellites during the interval from 2200 UT (on 27 September) to 0200 UT (on 28 September) are superimposed on Figure 2.Each satellite's footprint, computed at an altitude of 110 km using the T01 magnetic field model, is colorcoded as in Figure 1 with locations indicated at hourly intervals.The T01 model was selected because it has been optimized to represent the inner and near magnetosphere region (X GSM ≥ −15 R E ) for different interplanetary conditions and ground disturbance levels [Tsyganenko, 2002a[Tsyganenko, , 2002b]].To generate the footprint for each satellite, location information is extracted from the Cluster FGM data set at a temporal resolution of 1 s.The most recent upstream (P SW , IMF B Y , IMF B Z observed by THEMIS C) and geomagnetic (Dst) data are then selected as inputs to calculate the footprint positions at a 1 s resolution.
Interplanetary Conditions
[13] Figure 3 presents an overview of upstream interplanetary magnetic field (IMF) and solar wind conditions for the 4 h interval spanning midnight on 28 September.These measurements, recorded by the THEMIS C probe, are important in two key respects.First, they indicate the likely energy and momentum input to the magnetosphere during the interval in question.Second, these upstream observations parameterize the T01 magnetic field model used to estimate the magnetic footprints of the Cluster satellites (as shown in Figures 1 and 2).Of particular relevance are the solar wind plasma and interplanetary magnetic field engulfing the dayside magnetosphere.As such, the data presented in Figure 3 are time shifted (or lagged) to account for the Earthward propagation from the point of measurement to the dayside magnetopause.For this study, based upon the probe's location (∼12 R E upstream of the magnetopause) and the observed solar wind plasma velocity, upstream parameters from THEMIS C are lagged by +3 min in order to present the solar wind and IMF conditions impinging upon the dayside magnetopause.
[14] The B Z component of the IMF was directed southward almost continuously throughout this interval (with a brief northward excursion at 0030 UT) while the IMF B Y component was positive (duskward).Given the generally similar magnitudes of both components, this resulted in an IMF clock angle (defined as arctan(B Y /B Z )) of ∼135°throughout the interval.The B X component was positive throughout (except for a brief negative excursion at ∼2220 UT), indicating that IMF phase fronts were tilted toward the Earth and the overall interplanetary magnetic field magnitude remained between 2.0 and 3.5 nT.The antisunward ion velocity typically ∼325 km s −1 declined sightly over the 4 h interval, while the ion density increased gradually from 12 to 15 cm −3 .As a result, the solar wind pressure varied between 2 and 3 nPa.
Auroral and Ground-Based Measurements
[15] Figure 4 presents an overview of the ground-based measurements used in this study.Figure 4a shows iono-spheric LOS Doppler velocity measured along the high time resolution beam (beam 5) of the SuperDARN Iceland East radar, plotted as a function of universal time and magnetic latitude.Velocity measurements are color-coded according to the color bar on the right side, with positive (green/blue) velocities directed toward the radar and negative (yellow/ red) velocities directed away from it.The magnetic latitude of the ASI zenith is indicated by a dashed horizontal line.Given the orientation of the radar FOV (as indicated in Figure 2), beam 5 does not exactly overlook the Tjörnes Rainbow ASI site.Relative to the ASI zenith, beam 5 crosses the ASI magnetic latitude ∼50 km westward of the site and crosses the ASI magnetic meridian ∼50 km northward of it.Throughout the interval, backscatter was observed at various ranges, but after ∼2330 UT, a persistent band of backscatter was observed between 66.5°and 67.5°( highlighted by the dotted horizontal lines).Figure 4b presents a time series of LOS velocity, averaged over range gates between these latitudes.The vertical axis has been reversed such that negative velocities (corresponding to poleward motion) are represented by values increasing toward the top of the page.
[16] Figure 4c is a keogram derived from the magnetic meridian of the Tjörnes ASI.For clarity, these data have been presented in an inverted gray scale such that areas of dark shading correspond to bright auroral emission.Brightness is presented in a system of arbitrary units because the Rainbow ASI system does not yield calibrated brightness measurements.The magnetic latitude of the ASI's zenith and the upper/lower boundaries over which SuperDARN ionospheric radar velocities are averaged are overlaid onto the keogram as dashed and dotted horizontal lines, respectively.It should be noted that at the Tjörnes ASI site, magnetic local time is approximately the same as local time (MLT = UT + 14 min) such that the universal time annotation on the horizontal axis is a reasonable approximation to the magnetic local time of the meridional observations.Italicized numerals/ letters indicate features discussed below.
[17] Figures 4d-4g show magnetic field data from the NIPR fluxgate magnetometer located at Tjörnes (i.e., colocated with the Rainbow ASI). Figure 4d presents unfiltered "raw" magnetometer data with the H component (black trace and left scale) directed toward magnetic north and the D component (red trace and right scale) directed orthogonally eastward within the horizontal plane.Figures 4e and 4f present the same magnetometer data, but band-pass filtered to reveal fluctuations in the Ps6 pulsation range (with periods between 4 and 40 min) and the Pi2 pulsation range (with periods between 40 and 150 s), respectively.To study the current structures that underlie these magnetic fluctuations, it is necessary to derive a sequence of equivalent current vectors.For an E region current system with a spatial extent greater than the E region height, and assuming a horizontally uniform ionospheric conductivity, the ground magnetic field deflections, b, can be related to an ionospheric equivalent current density, J, by where the H and D subscripts indicate the geomagnetic northward and eastward components, respectively [Lühr and Schlegel, 1994].Figure 4g therefore presents equivalent current vectors derived from Tjörnes magnetometer data, preprocessed by bandpass filtering to retain Ps6 pulsations (as in Figure 4e).Equivalent current vectors pointing ver-tically (horizontally) on the page correspond to northward (eastward) currents, and an eastward 0.1 A m −1 equivalent current vector is shown for scale.For context, Figures 4h and 4i show time series of the auroral electrojet (AE) Figure 4 index (Figure 4h) and both the provisional AU and AL indices from which it is derived (Figure 4i) to indicate global electrojet activity in the auroral zone.
[18] The observations presented in Figure 4 give an overview of the temporal evolution of the auroral features.Before describing these in more detail, it is worthwhile to introduce the spatial evolution of the auroral structures under scrutiny.In Figure 5, we present a summary of the auroral omega bands observed just after midnight on 28 September 2009.Specifically, Figure 5 shows auroral ASI data projected onto a magnetic latitude/magnetic local time coordinate system at 110 km altitude as if viewed from above.Figures 5a-5l show selected color auroral images as recorded by the Tjörnes ASI between 2334:00 UT and 0051:18 UT with the estimated footprints of the Cluster satellites also indicated.In order to aid comparisons between the time series and spatial data (Figures 4 and 5, respectively), key features are commonly labeled.For example, the specific timings of the 12 all-sky images shown in Figures 5a-l are labeled a-l in the ASI keogram presented in Figure 4c.Also indicated is the train of auroral omega bands, labeled i-v in Figures 4 and 5.The timing of events introduced in the discussion session, such as key stages of the observed substorm dynamics and the times at which the four Cluster satellites cross the Tjörnes ASI keogram meridian, are also indicated at the top of Figure 4.
[19] At the start of the interval presented in Figure 4 (23 UT on 27 September 2009), the ground-based observations suggest low geomagnetic activity.The AE index was steady at ∼100 nT and the Tjörnes ground magnetometer observed a relatively undisturbed magnetic field.At this time, the Tjörnes ASI observed only very faint auroral activity characterized by faint, patchy, and diffuse emission poleward of the zenith and a faint east-west aligned arc slightly equatorward of the zenith.This arc (just visible in the keogram presented in Figure 4) had been present for the preceding hour following earlier substorm activity at 2200 UT.Throughout the first ∼45 min of this interval, the Iceland East SuperDARN radar observed limited and sporadic ionospheric backscatter in the vicinity of the ASI FOV, characterized by persistent bursts of equatorward/westward (positive, color-coded blue) flow poleward of 69°magnetic latitude that were not associated with auroral emissions.Equatorward of 68°magnetic latitude, patchy regions of poleward/eastward (negative, color-coded red) flow were observed.Given the relatively short range (∼250 km) at which the SuperDARN Iceland East radar was sounding the auroral oval, it is likely that the radar pulses were being backscattered by E (rather than F) region ionospheric plasma irregularities.
[20] As shown in Figure 3, the IMF was directed southward and duskward throughout this interval.In fact, inspection of a longer time series of upstream data indicates that the IMF B Z component had been southward almost continuously for the preceding 10 h.It is therefore not surprising that the faint arc observed equatorward of the Tjörnes ASI zenith was observed to drift slowly equatorward, consistent with expected motion during the growth phase of a magnetospheric substorm.Inspection of individual ASI images reveals that at 2333:36 UT the faint eastwest aligned arc brightens at the western (duskward) edge of the imager's FOV.This brightening was accompanied by a brief magnetic pulsation in the Pi2 band and was followed by brightening of the entire arc over the next minute (Figure 5a).In the following ∼3 min a second, faint, arc developed just poleward of the existing arc in the western half of the FOV, extending to just eastward of the zenith (clearly visible in the keogram).However, no significant magnetic disturbances were observed at the Tjörnes station and the global geomagnetic indices do not indicate significant geomagnetic activity at this time.
[21] A further, sustained, burst of Pi2 pulsations was observed at 2340:00 UT and over the next ∼5 min, the faint poleward arc brightened and moved poleward.This was accompanied by the onset of a steady decline in the H component magnetic field recorded at Tjörnes and an enhancement of the AE index (due to a sharp decrease in the value of the AL index).At 2344:30 UT, a few minutes after the Pi2 pulsations began, the poleward arc brightened significantly (Figure 5b).As the arc brightened, a sudden increase in the amount of E region ionospheric backscatter was observed in the region of the ASI zenith by the Iceland East radar, with the flow directed strongly (>200 m s −1 ) away from the radar (poleward and eastward) for the next 5 min.
[22] After remaining steady for ∼13 min after 2344:30 UT, the poleward arc brightened dramatically and expanded, starting at 2357:12 UT (Figures 5c and 5d).This intensification in auroral emissions was accompanied by a (colocated) sharp increase in the northward and eastward ionospheric velocity observed in Beam 5 of the Iceland East radar, a further intensification in Pi2 pulsation amplitude and sharp disturbances in the H and D components of the ground magnetic observed at Tjörnes.The auroral breakup and poleward expansion continued over the following minutes (Figure 5e).
[23] During the next ∼45 min a series of undulations or torches were observed propagating eastward through the ASI field of view.Five examples (numbered i-v), indicated in the keogram presented in Figure 4, correspond to omega- shaped torches on the poleward boundary of the visible auroral emission in Figures 5f-5l.
[24] Throughout the period when omega bands were transiting the ASI field of view, strong Ps6 pulsations were recorded by the Tjörnes magnetometer.Although the phasing of H and D component fluctuations varied, after ∼0030 UT, the two components were approximately 180°o ut of phase.When plotted as ionospheric equivalent current vectors, these fluctuations manifest as clockwise rotations in the equivalent current direction.At ∼0100 UT, following the peak in the AE index (due to a minimum in the AL index), auroral emissions underwent another sudden poleward expansion.For the next ∼45 min, multiple pulsating arclets filled the ASI field of view.
Magnetospheric Observations
[25] As indicated in Figure 5, the Cluster quartet entered the Tjörnes ASI field of view from the eastern horizon (moving east to west) when the torch-like auroral features were moving west to east over the ASI.We will therefore examine in situ field and particle measurements from the satellites as they transit the Earth's magnetic tail.
[26] Figure 6 presents field and particle measurements from Cluster 3 between 2300 and 0200 UT.At this stage, we present detailed data from one satellite only, as the measurements are similar across the Cluster quartet.Multisatellite measurements are presented in section 4. Figure 6 (first to third panels) show standard energy-time spectro- grams of electron differential energy flux in directions parallel, perpendicular, and antiparallel to the local magnetic field.These spectra include data from both the high-and low-energy electron analyzers that constitute the PEACE instrument (HEEA and LEEA, respectively) and have temporal resolution equal to the satellite's spin period (∼4 s).
[27] Figure 6 (fourth to seventh panels) present corresponding magnetic field measurements from the Cluster 3 FGM experiment.Although these three-component data are analyzed at a resolution of 5 vectors per second, they have been smoothed by application of a running average window of length equivalent to the satellite spin period to remove high-frequency fluctuations.Figure 6 shows the B X , B Y and B Z magnetic field components in the GSM coordinate system; the residual magnetic field components (DB X , DB Y , and DB Z ) after subtraction of the (T01) model magnetic field from the observed field; B Z component measurements, band-pass filtered to reveal oscillations in the Ps6 pulsation range; B Z component measurements, band-pass filtered to reveal oscillations in the Pi2 pulsation range.
[28] Figure 6 (eighth and ninth panels) present the electric field measurements made by the Cluster 3 EFW instrument and the E × B plasma velocity (V E × B ) based on combined magnetic and electric field measurements.All data are presented according to a common universal time axis that is also labeled in terms of the magnetic latitude and magnetic latitude of the satellite's T01 footprint and its radial distance from the Earth.The time at which the Cluster 3 satellite crossed the central magnetic meridian of the Tjörnes ASI is indicated by a dashed vertical line.
[29] At 2300 UT (the start of the interval presented in Figure 6), Cluster 3, located ∼5 R E from the Earth, was moving southward toward the equatorial plane in the 2 MLT sector.Over the next 3 h, the satellite's elliptical orbit took it southward and slightly dawnward, traversing the inner edge of the plasma sheet and doubling its radial distance from the Earth by 0200 UT.
[30] Throughout the interval, the Cluster 3 PEACE electron detectors observed a population of 1 to 10 keV electron in the field parallel, perpendicular and antiparallel directions (clearest in the field perpendicular energy-time spectrogram).We note that high-energy field antiparallel measurements from the PEACE HEEA sensor are not available throughout.Starting at ∼2355 UT, short-lived bursts of electrons with dispersed energy signatures in the 0.1-10 keV range were observed in the field parallel and antiparallel directions.These electron bursts, each lasting between 5 and 15 min, were observed intermittently until ∼0130 UT.
[31] Magnetic field measurements made by Cluster 3 (Figure 6) indicate the expected decline in magnetic field strength as the satellite receded from the Earth.The residual magnetic field (D B), calculated by subtracting the timeand position-dependent T01 model field (parameterized by upstream data from THEMIS C, as described above), indicates the perturbations from the expected magnetic field.Throughout the 3 h interval presented in Figure 6, the DB X component was relatively small (typically within the 0-10 nT range) with the largest (∼15 nT) residuals occurring during Cluster 3's encounters with the transient field parallel/ antiparallel electron fluxes.The general trend in the DB X component (increasing from 2300 to 0000 UT, decreasing from 0000 to 0100 UT, and increasing again from 0100 to 0200 UT with significant perturbations as the satellite was engulfed by energetic electrons) was repeated in the DB Y and DB Z components.Overall, DB X , the smallest residual, was positive (suggesting that the observed B X was greater than predicted); DB Y was generally larger and positive (suggesting that the observed B Y was greater than predicted); and DB Z was the largest residual and negative (suggesting that the observed B Z was smaller than predicted).We note that the largest residual fields (observed during several particle encounters or more generally after ∼0115 UT) approached ∼50% of the observed magnetic field.The residual magnetic field data presented in Figure 6 also include periodic oscillations.When band-pass filtered with appropriate high-and low-frequency cutoff filters, the magnetic field measurements from Cluster revealed Ps6 and Pi2 pulsation activity, broadly corresponding to the wave activity observed by ground-based magnetometers (we note that for reasons of clarity, Figure 6 only presents band-pass filtered B Z component data, but equivalent activity is observed in all three magnetic field components).
[32] Shortly after 2330 UT, the EFW instrument began to record an increasingly variable electric field.The variability and strength of this field were generally related to the parallel/antiparallel electron fluxes and accompanying magnetic disturbances; that is, the peak electric fields were observed at times when the parallel/antiparallel electron fluxes were enhanced from the background level.Analysis of the electric field data between 2330 and 0130 UT revealed a dominant, 150 s oscillation in all three components.In the magnetic field data, perturbations with ∼900 s periodicity dominate, with lower power peaks in the frequency spectrum corresponding to 450, 300 and 150 s periodicities.When combined to estimate the local plasma velocity, these measurements reveal that V E × B was generally largest at Figure 6.Electron flux and magnetic field measurements from the Cluster 3 satellites.The first to third panels present PEACE energy-time spectrograms of differential energy flux (DEF) parallel, perpendicular and antiparallel to the local magnetic field.DEF is color-coded according to the color bar on the right side.The fourth and fifth panels show the GSM magnetic field components measured by the FGM experiment (B GSM ) and the residual magnetic field components (DB GSM ) that remain following subtraction of the local magnetic field predicted by the T01 magnetospheric field model.The sixth and seventh panels show B Z component data, band-pass filtered to reveal pulsations in the Ps6 and Pi2 frequency ranges, respectively.The eighth and ninth panels present electric field measurements from the EFW experiment and the resulting V E × B , respectively.All panels are plotted according to a common universal time axis.The magnetic latitude and local time of the satellite's footprint, as well as its radial displacement from the center of the Earth, are also indicated.The time at which the satellite's footprint crossed the Tjörnes ASI MLT meridian is indicated by a dashed vertical line.
times when high parallel/antiparallel electron fluxes in the ∼keV energy range were observed.
Discussion
[33] In section 3, we introduced ground-based observations of omega bands propagating eastward along the poleward boundary of an east-west aligned auroral arc.In this section, we examine the bands' structure and evolution in the context of the geomagnetic and magnetospheric conditions that prevailed at the time.
Ionospheric Electrodynamics
[34] The auroral and magnetic measurements presented in Figure 4 clearly indicate substorm activity in the late hours of 27 September 2009.In the hour prior to 0000 UT on the 28 September, typical growth phase conditions were observed during a period of steady southward IMF.Specifically, a quiet auroral arc was observed to drift equatorward for an hour or more before brightening at 2333:36 UT.
Although it occurred at the same time as a weak (∼5 nT peak-to-peak amplitude), short-lived (<3 min duration) Pi2 pulsation, and was followed by a faint poleward drifting arc, this auroral brightening was not accompanied by significant local or global magnetic disturbances.However, the subsequent burst of Pi2 activity, starting at 2240:00 UT, was followed by the brightening and northward expansion of the poleward auroral arc.The auroral dynamics were accompanied by a steady decrease in the H component of the magnetic field, indicating a strengthening of the overhead westward electrojet, and a sharp increase in the AE index, indicating a global intensification of the auroral zone electrojets.After continued growth of the AE index, auroral dynamics and Pi2 activity increased considerably at 2357:12 UT and ground magnetometer data indicated a sudden deepening of the observed H component negative bay.
[35] We interpret these observations as evidence of multistage substorm activity.We suggest that the first stage, starting at 2333:36 UT, was a pseudobeakup that did not evolve into a full substorm.However, the second stage, starting with the Pi2 pulsations observed at 2340:00 UT, developed into a full substorm and marked the onset of the expansion phase.This expansion phase was characterized by poleward moving auroral structures, the brightening and broadening of an equatorward arc and enhanced electrojet currents.The third stage comprised a sharp intensification of the substorm expansion phase at 2357:12 UT, leading to increased currents flowing overhead and a sudden increase in auroral dynamics.The inferred timing of these three stages, labeled "psuedobreakup", "substorm onset" and "substorm intensification", are indicated at the top of Figure 4. Inspection of individual auroral images (Figures 5a-5e) indicates that the substorm expansion phase onset was initiated in the premidnight MLT sector, westward (duskward) of the Tjörnes ASI.This location is consistent with the typical premidnight location of the auroral brightenings associated with expansion phase onset [e.g., Frey et al., 2004].
[36] Within a few minutes of the substorm expansion phase onset and subsequent intensification, auroral omega bands were observed propagating eastward (dawnward) from the onset region.Typically, the auroral structures extended ∼200 km in the north-south direction and ∼150 km in the east-west direction.Based upon their transit time across the ASI field of view, their eastward propagation speed was estimated to be ∼400 m s −1 .At the time these structures were observed, the poleward edge of the main auroral arc was located overhead the Tjörnes ASI such that the omega-shaped torches extended to the north of the zenith (and the northern coastline of Iceland).Nevertheless, the Tjörnes magnetometer (which integrates over a region spanning several hundred kilometers) recorded Ps6 pulsations during the passage of the omega bands.When plotted as ionospheric equivalent currents, these magnetic perturbations are consistent with the passage of vortical ionospheric Hall currents associated with upward/downward field-aligned currents over the magnetometer [Lühr and Schlegel, 1994;Wild et al., 2000].
[37] The average line-of-sight ionospheric flow velocity in the main band of radar backscatter in beam 5 (the region between the dashed lines in the radar/ASI panels of Figure 4) was generally directed away from the radar.Given the orientation of the beam (northward and eastward), the precise direction of this flow cannot be resolved unambiguously.However, the average LOS velocity increased rapidly from zero at the beginning of the interval (when limited backscattered signals were available) to over 200 m s −1 away from the radar during the flow burst, which coincided with the substorm expansion phase onset.A second high-speed flow burst between 0015 and 0020 UT corresponded to a bright auroral transient that formed simultaneously with omega band iii in Figure 4; otherwise, there is no clear correlation between the relatively steady ∼100 m s −1 flow away from the radar and omega band passage.In the case of the omega bands labeled i and iii, there is some evidence of backscatter feature recession from the radar (migration to increasing latitudes) as the auroral structures crossed the ASI meridian.
[38] The enhanced background flow observed is consistent with large-scale convection development during the substorm.No direct relationship with the omega band structures is expected [e.g., Grocott et al., 2002].On the other hand, Grocott et al. [2004] observed the flow signature of a substorm pseudobreakup and concurrent bursty bulk flow in the magnetosphere.In this case the flow signature was of a more vortical nature, being related to the associated field-aligned current system.The similarly vortical nature of the omega band current system could therefore explain the poleward component of the flow features observed in this case.
Magnetic Field Line Mapping
[39] The Cluster 3 field and plasma observations introduced in Figure 6 indicated structured particle fluxes in the magnetotail when the suite of ground-based instruments observed the eastward propagating auroral omega bands.In order to investigate any possible link, we present field parallel electron fluxes observed by all four Cluster satellites (Figure 7) and indicate the footprint location of each satellite relative to the auroral observations discussed above.The latitudinal profiles of the Cluster footprints are overlaid on the ASI keogram and SuperDARN velocity panels included in Figure 4.In terms of the motion of the footprint during the interval under study, the slightly duskward orbital motion of the satellite is less significant that the dawnward Figure 7 rotation of the Earth, which steadily brings the ASI FOV under the Cluster magnetic footprints.As indicated in Figures 1, 2 and 5, Cluster 2 (red) was located farthest from the Earth and at the earliest MLT of the four satellites.Its footprint was therefore located farther west and at the highest magnetic latitude of the four satellites.Cluster 1 (black), 3 (green) and 4 (blue) are located at similar magnetic local times, with 3 and 4 slightly closer to the Earth.These three satellites have magnetic footprints at similar magnetic local times (within 0.2 h of MLT); Cluster 1's footprint is located ∼1°of magnetic latitude poleward of the Cluster 3 and 4 footprints at the start of the interval (reducing to ∼0.5°by the end).
[40] As discussed above, comparisons between the modeled and observed magnetic field at the location of Cluster 3 indicate that the T01 model (exploited to estimate the Cluster magnetic footprints) did not fully reflect the actual magnetospheric field configuration during the interval of interest.Residual magnetic fields suggest that the actual field was more stretched (with larger B X and B Y , but smaller B Z ) than predicted for the T01 model.The sense and temporal evolution of these residuals were consistent with substorm activity inferred from ground-based observations.The residual fields increased during the growth phase as the tail field became increasingly stretched.They subsequently decreased as the tail field dipolarized during the expansion phase, with brief disturbances due to the passage of bursts of energetic electrons moving parallel and antiparallel to the local field line.
[41] As might be expected, this suggests that the reliability of the magnetic field model used is uncertain during the late growth phase and early expansion phase of the substorm.Nevertheless, the model is required to estimate the satellite footprints in the ionosphere.Uncertainties are expected to be greatest during the Cluster 2 passage through the ASI field of view in the early part of expansion phase.However, for operational reasons, electron measurements are only available from Cluster 2 until just after midnight on 28 September, before it had passed over the Tjörnes ASI site.The estimated Cluster 2 footprint (with questionable reliability) was poleward (tailward) of the auroral omega bands and, with the exception of brief, glancing encounters with the poleward boundary of omega band torches, did not traverse auroral structures until the latter part of the interval when the aurora had expanded poleward to fill the ASI field of view.However, the remaining Cluster satellites (1, 3, and 4) were recording throughout the conjunction with the ground-based experiments, and magnetic field line mapping from the magnetosphere to the ionosphere is essential to this study.
[42] To estimate the level of uncertainty in field line mapping during overflights of the remaining Cluster satellites spacecraft through the Tjörnes ASI, the T01 mapping employed in this study has been compared to equivalent mapping using the Tsyganenko 1996 (T96) model [Tsyganenko, 1995].Although it does not yield definitive mapping errors, this benchmarking reveals the extent to which the field line mapping depends on the specific magnetospheric field model selected.We therefore recompute the Cluster footprints using the T96 model using identical input parameters and compare the results to those from the T01 model.Between 2200 and 0200 UT, the average displacement between the Cluster footprints estimated by the two models is ∼0.5 degrees of magnetic latitude and ∼0.25 degrees of magnetic longitude (approximately 1 min of magnetic local time).In the ionosphere (at 110 km altitude) this corresponds to a distance of approximately 50 km.At 0100 UT, when Cluster 1, 3 and 4 were in the vicinity of the Tjörnes ASI meridian, the westward horizontal speed of the footprints at 110 km altitude was ∼0.25 km s −1 , irrespective of the magnetospheric model selected.As such, the ∼0.25°l ongitudinal difference in the T01 and T96 footprints corresponds to a difference in arrival time at a specific magnetic meridian of ∼45 s, with the T01 footprints consistently located slightly poleward and westward of the T96 footprints at any given universal time.We therefore conclude that although the two Tsyganenko field models predict slightly different satellite footprint locations (for a given set of input parameters), the discrepancy is not significant.Ultimately, the choice of magnetospheric model is not critical to the analysis that follows and the selection of the T01 (inner magnetosphere) model is appropriate.
Magnetosphere-Ionosphere Coupling
[43] Although the footprints of the Cluster 1, 3 and 4 satellites were at latitudes comparable to the omega bands, they did not encounter the eastward moving auroral structures until after ∼0019 UT, when residual magnetic fields at the satellites were much reduced compared to those encountered by Cluster 2 (some 30 min earlier).We therefore use the estimated footprints to compare in situ measurements at the remaining satellites and the auroral luminosity at each satellite's footprint.Figure 8 presents this comparison for Clusters 1, 3 and 4.
[44] The brightness in Figure 8 is at the satellite's Northern Hemisphere magnetic footprint.To take into account small uncertainties in the magnetic field mapping, the auroral brightness at each time has been calculated by averaging over a 25 km radius area centered on the estimated footprint position.The diameter of the averaging region is therefore comparable to the displacement found between the footprints yielded by the T01 and T96 magnetic field models in the benchmarking exercise described above.Figure 8 shows the time series of auroral luminosity averaged around each footprint as the satellite overflew the Tjörnes ASI.The black trace indicating auroral luminosity is dotted when the satellite footprint is within 10°of elevation Figure 7. Field-parallel electron differential energy fluxes measured at Cluster 1, 2, 3 and 4 (third to sixth panels).Fluxes are color-coded as a function of universal time and particle energy according to the color bar on the right side.To compare the in situ measurements with ionosphere observations, the first and second panels show the SuperDARN and all-sky imager data presented in Figure 4 on which each satellite's magnetic footprint has been indicated by a colored dashed line (C1, blank/white; C2, red/white; C3, green/white; C4, blue/white).Furthermore, each electron energy-time spectrogram is annotated with the MLT of that satellite's footprint (horizontal axis) and the time at which the footprint crosses the magnetic meridian of the Tjörnes ASI (dashed vertical line).
Figure 8 from the local horizon (where uncertainties in the all-sky projection are most sensitive to the assumed emission altitude) and solid where the footprint is >10°from the horizon.
[45] S k in Figure 8 is the Poynting flux and integrated electron energy flux based on in situ plasma measurements.Electric and magnetic field measurements from the Cluster EFW and FGM instruments have been used to calculate field-aligned Poynting flux, S k , at the satellite location: where B is the local magnetic field and S is the Poynting vector.As described by Keiling et al. [2002], to calculate the Poynting flux vector, perturbation electric (dE) and magnetic (dB) fields are combined; thus, The field-aligned Poynting flux, S k , indicated by the black trace, accounts for the transport of energy along the background magnetic field.The ratio of the integrated electron energy fluxes parallel and perpendicular to the local magnetic field is shown for comparison (dotted green trace).This is computed by summing the differential energy flux (DEF) of over all energy ranges covered by the PEACE instrument in the pitch angle bin containing the local field, then dividing by the equivalent integrated energy flux from the orthogonal pitch angle bin.Values of this ratio >1 indicate that the field-aligned electron energy flux exceeds the field-perpendicular energy flux.Values <1 indicate that the field-perpendicular electron energy flux is greater.
[46] The e − DEF in Figure 8 is the field-aligned differential energy flux (DEF) observed by the PEACE instrument in two representative electron energy ranges.Although the exact energy bins differ very slightly between the PEACE sensors on each satellite, comparable energy levels have been selected in each case.Electron DEF in a ∼100 eV wide energy bin centered on approximately 500 eV is indicated by the blue trace, and DEF in a ∼700 eV wide energy bin centered on approximately 3 keV range is shown by the red trace.The central energies of each bin are indicated in Figure 8.
[47] The footprints of the Cluster 1, 3 and 4 satellites entered the "central" portion of the Tjörnes ASI field of view (>10°from the horizon) at 0025 UT, 0022 UT and 0019 UT, respectively.By the time they had traversed the eastern half of the field of view and arrived at the central meridian of the ASI (indicated by dashed vertical lines at 0102 UT (Cluster 1), 0057 UT (Cluster 3) and 0056 UT (Cluster 4) in Figure 8), a series of omega bands had been encountered.Examination of individual ASI frames indicates that Cluster 3 and 4 cut though the omega bands labeled as iii, iv, and v in Figures 4 and 5 with corresponding peaks in the brightness traces for these satellites (labeled iii-v in Figure 8).We note that the peak at 0051 UT in the Cluster 3 and 4 brightness traces corresponded to an encounter with a narrow arc that briefly formed poleward of the main arc (Figure 5l) and not an omega band attached to the main arc.Cluster 1, at slightly higher latitude, did not pass though any omega band structures.
[48] As discussed above, the Cluster 3 electron energytime spectra presented in Figure 6 reveal a high-energy (1-10 keV) electron population in the magnetotail, evident at all pitch angles.In addition, short-lived enhancements in the differential energy flux carried by electrons in the lower, 0.1-1 keV range in the field parallel and antiparallel directions were observed, starting at ∼0000 UT.Although Figure 6 presents measurements from Cluster 3 only, similar structures were also observed at Cluster 1 and 4 (no Cluster 2 electron data were available after 0005 UT).These data indicate a large differential energy flux of high-energy electrons during the interval in which auroral omega bands were observed.The high-energy population has strongest fluxes perpendicular to the magnetic field, suggesting that it is a largely trapped population.The angular resolution of the PEACE instrument is 15°in the plane parallel to the satellite spin axis and 11.25°in the plane perpendicular to the spin axis.Since the loss cone of precipitating electrons is likely to be ∼3°, the field-aligned energy-time spectra will contain a mixture of precipitating and trapped electrons.Conversely, the lower-energy electrons were only observed in the fieldaligned (parallel and antiparallel) sensor, suggesting that the electrons observed in the parallel pitch angle bin were more likely to precipitate into the auroral zone, with the remainder of that population mirroring at lower altitudes and being observed at antiparallel pitch angles.
[49] The auroral brightness time series for the Cluster 3 footprint shown in Figure 8 includes the clear signatures of three omega bands.These features, which are labeled iii-iv, and v, correspond to similarly labeled features in Figure 5.Note that these features move eastward through Cluster 3's footprint between 0025 and 0050 UT, while the footprint is in the eastern portion of the FOV (i.e., prior to the transit of the satellite through the ASI's magnetic local time meridian, indicated by the dashed vertical line in Figure 8).Between 0000 and 0100 UT (corresponding to the satellite's passage through this eastern portion of the imager), the field-aligned Poynting flux exhibits the same ∼150 s variability observed in the underlying electric field data, with localized peaks in Figure 8. Comparisons between in situ field and plasma measurements at the Cluster satellites and auroral brightness at the ionospheric footprint (Cluster 1, 3, and 4).For brightness plots the black trace shows the auroral brightness (arbitrary units) at the location of the satellite's footprint.The brightness trace is dotted where the footprint is within 10°of the local horizon at the Tjörnes ASI, but is presented as a solid line where the footprint lies more than 10°from the imager's local horizon.The field-aligned Poynting flux S k (derived from electric and magnetic field measurements) is shown in black, and the ratio of field-parallel to field-perpendicular integrated electron flux is shown in dotted green (according to the scale on the right).The blue trace shows the DEF of electrons in the PEACE 500 eV energy bin; the red trace shows the DEF of electrons in the 3 keV energy bin.All panels are plotted according to a common universal time axis and are annotated with the MLT of that satellite's footprint and the time at which the footprint crosses the magnetic meridian of the Tjörnes ASI (dashed vertical line).Auroral omega bands discussed in the text are labeled iii-v.
the envelope centered at 0002 and 0018 UT.The parallelto-perpendicular electron energy flux ratio was typically less than unity (indicating that electron energy flux in the fieldperpendicular direction exceeded that in the field-parallel direction) and displayed similar short-period variability, with peaks at 0006 UT, 0022 UT and 0048 UT.The differential energy flux of electrons in the ∼3 keV energy range remained relatively constant throughout the interval, although modest (up to ∼50%) variations were observed (such as between 0000 and 0010 UT).In contrast, the differential energy flux carried by low-energy electrons (∼500 eV shown in Figure 8) varied by more than an order of magnitude throughout the interval, with the higher-flux intervals corresponding to the bursts of parallel/antiparallel electrons shown in the PEACE electron spectra (Figure 6).
[50] As the Cluster 3 footprint approached the Tjörnes ASI central meridian at 0057 UT, the aurora expanded poleward (as shown in the keogram in Figure 6).Consequently, as the satellite traversed the western half of the imager's FOV, it overflew dynamic auroral structures including, for example, a spatially localized brightening at 0101 UT, but no additional distinct omega bands.The auroral brightness at the footprint was high and variable over the following half hour until the satellite left the FOV.The field-aligned Poynting flux estimated from electric and magnetic field data was markedly lower during this interval than during the preceding hour, but fluctuations in the ratio of parallel-to-perpendicular integrated electron energy flux continued, driven by bursts of both high-and low-energy field-aligned electron flux.After ∼0130, the lower-energy electron DEF declined sharply, whereas the higher-energy electron DEF increased very slightly.
[51] To summarize the relevant Cluster 3 measurements, the calculated field-aligned Poynting flux varied rapidly throughout the interval when the satellite was conjugate to omega bands iii-v.The energy flux ratio between electrons moving in the field-parallel and field-perpendicular directions indicates that more flux was included in the latter, but the ratio fluctuated upward several times during the 0025 to 0100 UT interval in which the three omega bands were observed.Given the generally steady field-perpendicular differential energy flux observed during this period (as shown in Figure 6), this increase in the field parallel/ perpendicular ratio indicates enhancements in the fieldparallel direction.An exception is the sharp increase in the flux ratio at 0048 UT due to a simultaneous increase in the field-parallel flux and a decrease in the field perpendicular flux (clearly apparent in Figure 6).Scrutiny of individual electron energy channels of the PEACE instrument reveals that this increase in the amount of field-parallel differential energy flux is associated with large (factor of 10) enhancements in the flux of lower-energy electrons (illustrative ∼500 eV electrons shown in Figure 8).
[52] Any one-to-one correspondence between the auroral omega bands at the footprint of the Cluster 3 satellite and field and plasma measurements is not obvious.The fluctuations in Poynting flux indicate variable transport of energy along the background magnetic field toward the ionosphere, and the particle data indicate intervals of increased electron energy flux in the field-parallel direction, mainly carried by low-energy electrons (<1 keV).There is a suggestion of localized peaks in the flux ratio as the satellite passed over omega bands iii-v, but these are by no means the greatest flux ratios observed.The large peak in the flux ratio at 0048 UT follows omega band v by ∼4 min but precedes a short-duration enhancement in auroral brightness at 0051 UT.As noted previously, inspection of individual ASI frames reveals that this enhancement is a due to a narrow (∼10 km), faint, and short-lived (∼2 min) arc that appeared poleward of the main region of auroral emission (as shown in Figure 5l).
[53] Perhaps unsurprisingly (given the proximity of the satellites and their footprints), the Cluster 4 field and plasma measurements are very similar to those from Cluster 3. The satellites encountered the three omega bands (iii-v) prior to crossing the central meridian of the Tjörnes ASI (i.e., during the interval 0025 to 0050 UT) and also crossed the shortlived faint arc poleward of the main auroral emission at 0051 UT and the localized brightenings within the expanded area of auroral emissions after ∼0100 UT (all observed by Cluster 3).Variations in the field-aligned Poynting flux, the parallel-to-perpendicular integrated electron energy flux and the high (∼3 keV) and low (∼500 eV) electron differential energy fluxes were very similar to those observed by Cluster 3. The magnetospheric field and plasma structures observed by these satellites must therefore have spanned a region of the magnetosphere comparable in size to the satellite separation distance (approximately 1100 km throughout the interval presented in Figure 6).
[54] Previous studies [e.g., Wygant et al., 2000;Keiling et al., 2002] have compared in situ magnetospheric electric and magnetic field measurements with space-based auroral imagery to study the relationship between Alfvén wave Poynting flux and auroral features.In the case presented above, there was no one-to-one correspondence between peaks in Poynting flux observed at Cluster and peaks in auroral brightness due to omega bands.However, the average field-aligned Poynting flux measured by Cluster 3 and 4 during the first hour of the interval presented in Figure 8 (when omega bands were observed at the satellite footprints) was more than five times higher than during the following hour (when no omega bands were observed).Also, the average field-aligned Poynting flux observed by Cluster 1 during the 0000 to 0100 UT period (as it overflew the region poleward of the omega bands) was only 20-30% of that observed by Cluster 3 and 4 (as they cut through the omega bands).
[55] To identify Alfvén wave activity at the satellite location, we compared the ratio of the two perpendicular perturbation fields, dE and dB, to the local Alfvén speed [Keiling, 2009].Given the significant residual field that remained after the T01 model magnetic field was subtracted from the Cluster data (Figure 6), we have not used the T01 to determine the local magnetic field direction.Instead, we estimate B at the satellite by applying a 10 min running average to the FGM data.Electric and magnetic field vectors are then transformed into a field-aligned coordinate system (l, m, n), such that the n axis is directed parallel to B; the m axis is perpendicular to the n axis and the Z GSM direction; the l axis completes the right-handed set and is perpendicular to both m and n.Given the approximately Earthward directed field at the satellite's position, l is directed perpendicular to B and approximately northward, m is directed perpendicular to B and approximately eastward.
[56] Figure 9 shows example hodograms, derived from the field-perpendicular electric and magnetic field fluctuations measured by Cluster 3. The approximately circular loci of points demonstrate that the phase relationship between the dB l and dE m components (and the relationship between dE l and dB m components) is ∼90°.This suggests that the observed field perturbations are a result of propagating shear Alfvén waves.Between 0030 and 0100 UT, the fieldperpendicular component of the electric field perturbations (dE ? 2 = dE l 2 + dE m 2 ) varied between 0.5 and 2.0 mV m −1 , with a mean of 1.0 mV m −1 ; the equivalent component of the magnetic field perturbations (dB ? 2 = dB l 2 + dB m 2 ) varied between 0.5 and 5.0 nT, with a mean of 2.1 nT.The resulting E/B fluctuation ratio varied between 100 and 2500 km s −1 , with a mean ratio of 1062 km s −1 .This is comparable to the local Alfvén speed of ∼1000 km s −1 , based upon Cluster in situ field and plasma parameters.The field-aligned Poynting flux and the correlated electric and magnetic field perturbations observed at Cluster 3 are thus consistent with the propagation of shear Alfvén waves along the magnetic field.Crucially, the field-parallel Poynting vector at Cluster 3 and 4 is almost always positive, implying wave energy is being transferred from the plasma sheet to the ionosphere and is not reflected.This is consistent with the propagating shear Alfvén waves described by Watt and Rankin [2010] and may account for the source of the accelerated particle energy.
[57] Although a one-to-one causal relationship cannot be found, the overall picture that emerges from the in situ magnetospheric data suggests shear Alfvén wave activity in the plasma sheet-accelerated electrons, typically with energy <3 keV, Earthward along the magnetic field line from a location tailward of the Cluster 3 and 4 satellites (>8 R E downtail).In situ values of Poynting flux can be extrapolated to ionospheric altitudes by multiplying the in situ flux by a factor equal to the ratio of the background magnetic field strength at the location of the in situ measurements to the magnetic field strength at ionospheric altitude [Wygant et al., 2000].Using values of ∼50 nT for the in situ field (observed by Cluster 3 at 0030 UT) and an ionospheric field at 110 km of 50,000 nT, the amplification factor due to the converging field lines in the vicinity of the Earth is ∼1000.Consequently, the field-aligned Poynting flux observed by Cluster 3 and 4 corresponded to a flux at ionospheric altitude of up to 100 mW m −2 , but averaging 17 and 13 mW m −2 for Cluster 3 and 4, respectively, between 0000 and 0100 UT.
[58] In a statistical study of 40 plasma sheet crossings by the Polar satellite, Keiling et al. [2002] warm plasmas (such as the tail plasma sheet), electrons become trapped in shear Alfvén waves, are accelerated, producing field-aligned beams likely to result in auroral brightening.Although the Poynting fluxes observed in this study are at the lower range of those reported by Keiling et al. [2002], we interpret the observation of enhanced and variable Alfvénic Poynting flux, accompanied by bursts of field-aligned electron flux in the plasma sheet during an interval in which auroral omega bands were observed, as evidence that these auroral structures are related to beams of electrons accelerated in the midtail plasma sheet.
Omega Band Structure and Formation
[59] The omega bands observed on the night of 27-28 September 2009 were somewhat atypical in several respects.First, they were observed close to magnetic midnight, whereas the vast majority of previous studies classified omega bands as a morning sector phenomenon.Second, the omega bands presented here were observed a few minutes after a substorm expansion phase onset (based upon magnetic field measurements and global auroral indices), rather than during the substorm recovery phase as is usually reported.Specifically, in this case study, the omega bands appear to emerge from the onset region located in the premidnight sector just westward (duskward) of the Tjörnes ASI.Although relatively small (∼200 km scale size), the omega bands drifted eastward, i.e., away from the onset region and toward dawn, at ∼0.4 km s −1 .Given that Lühr and Schlegel [1994] argued that omega bands and Ps6 pulsations are "essentially the same phenomenon seen by different instruments", the magnetic measurements presented above confirm that the optical signatures observed were indeed omega bands (albeit relatively small, slowly moving examples).
[60] Although previous studies [e.g., Lühr and Schlegel, 1994;Wild et al., 2000] have reported strong ionospheric plasma velocity shears at the boundary between the bright and dark regions of the omega band, there is little evidence of this effect in the features presented here.As indicted earlier, at the relatively short range (∼250 km) at which the SuperDARN Iceland East radar was sounding the auroral oval, it is likely that the radar pulses were being backscattered by E (rather than F ) region ionospheric plasma irregularities.Because of collisions between ionospheric ions and atmospheric neutrals, a two-stream instability limits the speed of the E region electron density irregularities exploited by the radar as backscatter targets [Robinson, 1986].Furthermore, due to the line-of-sight nature of the radar measurements, only a component of the true flow is measured by a single radar.As a result, the radar data presented above may have underestimated the true ionospheric plasma flow velocity.
[61] Uncertainties in the magnetic field model make detailed comparisons between ionospheric and magnetospheric measurements difficult.The superior temporal and spatial resolution of the ground-based auroral images available here (at least an order of magnitude higher than auroral image data yielded by space-based imagers, both spatially and temporally) highlights limitations in the mapping capability.Despite the lack of a one-to-one correlation between auroral features and satellite measurements, the in situ data suggest that electrons accelerated in the midtail plasma sheet powered auroral emissions during the interval in which the omega bands were observed.
[62] An interesting question left unanswered by this study is that of the fate of the omega bands after they left the Tjörnes ASI field of view.It is not clear whether these structures continued to propagate eastward and, if they did, how they evolved as they moved through the morning sector.Although we have been unable to find clear-sky auroral images from Scandinavia for this interval, IMAGE magnetometer data from the Scandinavian sector indicated Ps6 pulsation activity.This raises the possibility that stable omega bands might propagate dawnward over many hours of magnetic local time, retreating from the substorm onset region in the vicinity of the midnight sector.If true, this could account for the general association between omega bands and the substorm recovery phase.If, as in the case study presented above, omega bands are formed in the vicinity of the midnight sector shortly after expansion onset/ intensification, a steady eastward propagation would imply a delay before their observation in the morning/dawn sector.Eastward motion over 4 h of MLT at 0.4-2.0km s −1 would take between 20 and 100 min (at 68°magnetic latitude), implying that faster moving omega bands launched eastward from substorm onset in the midnight sector would arrive in the morning sector during the substorm recovery phase.Given the growing international archive of space-and ground-based auroral imagery that provides regional and global auroral imaging capabilities and multisatellite magnetosphere satellite missions, this question should be resolvable in the future.
[63] Omega bands have traditionally been linked with the substorm recovery phase [e.g., Opgenoorth et al., 1994] and associated with Ps6 magnetic pulsations in ground magnetometer data [André and Baumjohann, 1982;Opgenoorth et al., 1983;Steen et al., 1988].However, a variety of studies have linked these ionospheric phenomena to sources in the magnetosphere.For example, Steen et al. [1988] suggested that variations in the high-energy particle intensity at geosynchronous orbit are responsible for the generation of auroral omega bands, a proposal later supported by the findings of Tagirov [1993], who used the Tsyganenko T89 magnetospheric magnetic field model to demonstrate that auroral torches map to the equatorial plane 5-6 R E from the Earth.Subsequent studies linked omega bands to the magnetotail plasma sheet, with Jorgensen et al. [1999] concluding that they are the electrodynamic signature of the corrugated inner edge of a current sheet in the vicinity of geostationary orbit.Pulkkinen et al. [1991] exploited the Tsyganenko model to show that the omega bands and Ps6 pulsation map to the current sheet approximately 6-13 R E downtail from the Earth.
[64] Despite growing evidence that the source region of omega bands/Ps6 pulsations lies in (or at the boundary of) the current sheet, the source mechanism remains unclear.Proposed mechanisms include the development of the Kelvin-Helmholtz instability at the boundary between the boundary layer plasma sheet and the central plasma sheet due to flow shear [Rostoker and Samson, 1984], an interchange instability developing on the outer boundary of a hot plasma torus [Yamamoto et al., 1997], and spatially periodic electron precipitation caused by field-aligned electric fields generated by waves excited on the corrugated inner edge of the current sheet [Jorgensen et al., 1999].Although the observations presented in our study favor a process that generates earthward Alfvénic Poynting flux in the plasma sheet (tailward of geostationary orbit), it is unclear how an electronaccelerating mechanism in the tail might result in stable, azimuthally propagating auroral features that migrate to the morning/dawn sector, unless the source in the tail also spanned a range of local times.Further simultaneous, multipoint, in situ measurements are required to confidently validate or discount the previously proposed mechanisms (e.g., at azimuthally displaced locations at the inner edge of the plasma sheet).
Conclusions
[65] This study presents space-and ground-based observations of a series of omega bands in the midnight sector auroral ionosphere just after midnight on 28 September 2009.Specifically, this study exploited a ground-based auroral all-sky imager, magnetometer and coherent scatter high-frequency radar to diagnose the electrodynamics of the auroral structures.Simultaneous upstream solar wind and IMF measurements were provided by the THEMIS C probe, and in situ field and plasma measurement from the tail plasma sheet were provided by the four Cluster satellites in magnetic conjunction with the ground-based experiments.The results of the study can be summarized as follows.
[66] 1.A train of at least five clear auroral omega bands was observed, the first occurring within 5 min of a substorm expansion phase intensification during an interval of steady southward and duskward oriented IMF and unremarkable solar wind conditions (∼320 km s −1 Earthward speed and ∼2.5 nPa dynamic pressure).
[68] 3. The optical auroral features were accompanied by Ps6 magnetic pulsations, consistent with the passage of vortical ionospheric Hall currents associated with upward/ downward field-aligned currents over the magnetometer.
[69] 4.There was no compelling evidence that the omega bands were associated with an ionospheric flow shear at the poleward boundary of the main auroral oval, but this cannot be confirmed conclusively due to limited radar backscatter in the region poleward of the main oval.The average ionospheric flow inside the main auroral oval was between 100 and 250 m s −1 away from the radar throughout, consistent with dawnward flow in the dawn cell of the global ionospheric convection pattern.
[70] 5.The Cluster satellites, located in the tail plasma sheet observed transient bursts of electron differential energy flux, including dispersed energy signatures, throughout the interval when omega bands were observed in the vicinity of the satellite footprints.During the conjunction, generally enhanced Alfvénic Poynting flux was observed.Although variable in magnitude, the field-parallel Poynting flux was almost continuously directed toward the Northern Hemisphere.Electron plasma measurements indicated that electrons with energies <3 keV were accelerated in the field-aligned direction.
[71] 6.A one-to-one correlation between in situ plasma observations and the auroral structures was not found, perhaps due to limitations in the magnetospheric field model.
[72] Our observations agree with previous studies suggesting that omega bands have a source mechanism in the plasma sheet, tailward of geostationary orbit.However, the somewhat unusual magnetic local time of the structures presented here and their observation during the early substorm expansion phase hints that these features may not be restricted to the morning sector and the substorm recovery phase (as is often stated in the literature).This is consistent with the findings of Connors et al. [2003], who reported that Ps6 pulsations (considered to be the magnetic manifestation of auroral omega bands) can occur at or very near the time of onset of a substorm expansive phase, a pseudobreakup, or a poleward boundary intensification.
[73] We suggest that a survey of contemporary auroral imagery data sets (such as the archive of the North American THEMIS GBO network) may provide cradle-to-grave observations of omega band formation at substorm onset near the midnight sector and propagation over many hours of MLT into the late morning sector during the substorm recovery phase.Such observations might suggest that the common generation mechanism for Ps6 pulsations and omega bands can be found during the substorm expansion phase, rather than the recovery phase.
[74] Acknowledgments.For the provision of experimental equip- ment and technical support, J.A.W. and E.E.W. are indebted to the auroral imaging team at the Institute for Space Research at the University of Calgary.J.A.W. and E.E.W. were supported during this study by UK Science and Technology Facilities Council grant PP/E001947/1 while R.C.F., A.G., and M.L. were supported by STFC grant ST/H002480/1.SuperDARN operations at the University of Leicester are supported by STFC grant PP/ E007929/1.Cluster operations in the UK were supported by the STFC.We acknowledge NASA contract NAS5-02099 and financial support through the German Ministry for Economy and Technology and the German Center for Aviation and Space (DLR) under contract 50 OC 0302 for the THEMIS data used in this study.J.A.W. is especially grateful to J. Hohl of UCLA for assistance in the preparation of this paper.
[75] Robert Lysak thanks I. Rae and the reviewers for their assistance in evaluating this paper.
Figure 1 .
Figure 1.Locations of the THEMIS and Cluster spacecraft used in this study at 0000 UT on 28 September 2009, projected into the GSM (top) X-Z and (bottom) X-Y planes.Magnetic field lines derived from the T01 magnetospheric field model and the modeled magnetopause location are also shown, as described in the text.
Figure 2 .
Figure 2. The arrangement of ground-based experiments employed in this study.Coastlines are projected in a polar geographic coordinate system, with parallels of constant geomagnetic latitude overlaid at 80°, 70°, 60°and 50°north and geomagnetic meridians overlaid at 15°intervals (dotted lines).The light and dark gray shaded areas show the fields of view of the SuperDARN Iceland East and Tjörnes Rainbow ASI, respectively.The locations of the Tjörnes ASI (labeled TJRN) and the Iceland East radar site at þykkvibaer (labeled þYKK) are also indicated.Colored arcs show the magnetic footprints at 110 km altitude of the four Cluster satellites, color-coded as in Figure 1 (black, C1; red, C2; green, C3; blue, C4) with solid circular tick marks indicating each satellite's position at hourly intervals (note that the 0100 UT tick mark labels for C3 and C4 are omitted for clarity).
Figure 3 .
Figure3.Upstream solar wind and IMF conditions between 2200 UT (on 27 September) and 0200 UT (on 28 September) observed by the THEMIS C probe.From top to bottom, the interplanetary magnetic field strength; B X , B Y , B Z components; IMF clock angle (all in GSM coordinates); plasma ion velocity in the X GSM direction; ion density; and solar wind dynamic pressure.Data are lagged in time by 3 min in order to show conditions at the magnetopause as a function of UT.
Figure 4 .
Figure 4.An overview of ground-based data used in this study.(a) Line-of-sight ionospheric Doppler velocity measured by the SuperDARN Iceland East radar; (b) average line-of-sight velocity extracted from a subset of radar range gates; (c) an inverse gray scale keogram of auroral activity extracted from the magnetic meridian passing through the Tjörnes Rainbow ASI; (d) unfiltered H (black) and D (red) component ground magnetometer measurements; (e) H (black) and D (red) component ground magnetometer measurements band-pass filtered to reveal pulsations in the Ps6 band (4-40 min periods); (f ) H component ground magnetometer measurements band-pass filtered to reveal pulsations in the Pi2 band (40-150 s periods); (g) equivalent current vectors derived from ground magnetometer data; (h) variations in the (provisional) AE index; and (i) variations in the (provisional) AU and AL indices.Auroral omega bands discussed in the text are labeled i-v.The timings of ASI frames presented in Figure 5 are labeled a-l.
Figure 5 .
Figure 5. All-sky images recorded by the Tjörnes Rainbow imager during the passage of auroral omega bands.The images are projected onto a magnetic latitude/magnetic local time grid at an altitude of 110 km.(a) The dotted vertical line corresponds to the 0000 MLT meridian, with other MLT meridians indicated at 1 h intervals.In Figures 5b-5l, the ASI remains at the center, and these grid lines move owing to the advancing universal time.The curved dotted lines indicate the 70°N and 60°N parallels of magnetic latitude.Projected at an emission altitude of 110 km, the edge of the circular field of view (10°above the local horizon at the ASI site) corresponds to a ground range of approximately 500 km from the ASI.The magnetic footprints at 110 km of the four Cluster satellites are also overlaid, color-coded as in Figure 2. Auroral omega bands discussed in the text are labeled i-v.
Figure 6
Figure 6 compared mapped (ionospheric) peak Poynting flux with electron energy flux estimated from ultraviolet auroral images.They concluded that Alfvénic Poynting flux in the midtail region (4-7 R E ) is associated with and capable of powering localized regions of magnetically conjugate auroral emissions.Recent modeling work by Watt and Rankin [2010] indicated that in
Figure 9 .
Figure 9. Hodograms showing electric and magnetic field perturbations in the plane perpendicular to the magnetic field at the Cluster 3 satellite at (top) ∼0015 UT and (bottom) ∼0040 UT. (left) At each time, the hodogram shows the relationship between the dB l and dE m perturbations; (right) the hodogram shows the relationship between the dE l and dB m perturbations.The exact time range of each pair is indicated on the left and the starting position is shown by a shaded gray circle in each panel.Arrowed vectors join adjacent measurements (recorded at 4 s resolution). | 16,952.6 | 2011-03-18T00:00:00.000 | [
"Physics"
] |
ANIMATED 3D GRAPHICS AS VISUAL BRAND COMMUNICATION ON UKRAINIAN TELEVISION
Received 26 April 2019 Accepted 24 June 2019 Published 30 June 2019 The research of the use of 3D computer graphics in commercials is conducted based on monitoring of the commercial breaks on TV channels «1+1», «Inter» and «Ukraine» in the years 2015-2017. The TV channels for the research were chosen according to their performance and results of the international and Ukrainian festivals of advertising and TV design, as well as to the percentage of usage of 3D computer graphics in the winning projects. There were 27 round-the-clock monitoring sessions conducted. More than 17 000 commercials were defined and classified based on 3D computer graphics usage. The conclusions concerning the urgency of using three-dimensional animation as the means of visual brand communication and the quality level of the 3D computer graphics in the TV commercials are determined herewith. KEYWORDS
Introduction. Dynamic 3D graphics in broadcasting and TV commercials appears to be well developing in the sphere of visual communications. Visuals receive more and more complicated post processing when elements of 2D and 3D graphics are added to pictures filmed from nature. TV channels regularly break programming block in order to promote outside brands while direct commercials and also for their own media brand identification, while promo videos and advanced advertising. Channel's own brand lets it to create its recognizable image despite changes of faces and channel programs, to build the right strategy of positioning and promoting its own products (projects, programs, films) during the most suitable time for target audience watching depending on the network of the other competing channels. Media product, in its turn, is gaining new forms and powerful audio and visual options for TV airtime. Animated 3D graphics as a part of animation is one of the most common features in media design. 3D elements are directly used while creating an aesthetic appearance of the air, making navigation system for the viewer to have a guidance in broadcasting space and also in direct advertising. Composing most of the visual content of the television it is still left as understudied. «Throughout the past decade the usage of three dimensional elements has significantly risen and there are no evidences that this is just a temporary event» [1, p.122].
M. Murashko among other Ukrainian scientists developed a subject matter of animated graphics in her PhD work in 2017 «Project and art tools of motion design (based on commercial sample)» [3]. Design and projecting in television environment was studied by М. Marchenko and А. Yarmolenko, by considering peculiarities of visual communication projecting. In their article «Design and projecting in television environment» authors state that TV designer takes directly part in creating brand of broadcasting units [2, p.481]. Such American scientists as Gustavo E. [7] and Lovera C. [8] devote their works to the graphic design updated principles used in television environment as a form of branding. Leigh Hunt as one of the MANAGEMENT AND MARKETING 4(16), June 2019 foreign specialists studied TV branding and air promotion. His work of 2001 summarized experiences of traditional, cable and sputnik TV in the USA during the past 20 years and concluded the following «Our image as a visual expression of your values, defines outer appearance, voice and actions» [3, p.27].
Another profound work we have taken is «Branding TV: principles and practices» by oxford scientists W. McDowell and A. Batten [9]. Strategic brand communication campaigns were also revealed by Beth E. Barnes and Don E. Schultz [5]. Problems of using new editing technologies, digital effects and creating graphic television environment with them in the field of TV design were investigated by such modern theorists and research and practical workers as W. Weibel, І. Sazerland, Р. Lowton, L. Dorfsmann. T. Dwyer, an American scientist, made also his input into studies of experiences of media communication and multimedia and originated basic knowledge of media convergence in his book in 2010, where he partially investigated multimedia design as well [6]. P.Pavlou and D. Stewart generalized new terms of marketing existing in the new millennium in their work «Interactive advertising: new conceptual basis of marketing combination elements integration» in 2014 [10]. In the frames of set scientific goals we studied works on 3D graphics and animation. Analysis of animation principles of investigated advertisement was made under the influence of works by P. Blair, N. Brown, М. Wiberg, R. Williams, C. Hart, А. Karlberg, Ya. Kemnitz, N. Kryvulia, and І. Kuznetsov.
Animated 3D graphics (with the main difference from two dimensional graphics is the ability to project not solely the length or width but also depth) has replaced classic animated methods and shows more reality that makes it easy to convey the message and the aim of brand messages itself. The graphics is used to fully capture audience attention creating effects that had not been possible in the past.
Hardware and software development, video art and graphics improvement, on the whole, make a great influence on visual communication methods, changing old methods of projecting and creating new opportunities. On the other side it caused new problems and challenges. Designing TV airtime and advertisement has significantly amended not just due to new technologies but also due to scientific intelligence. Marketing and branding researches gain quick development as well as studies of TV design and visual communication. To be able to use animated 3D graphics more effective as a component of branding, we shall research its functional features as a method of visual communication since it had not been done before.
Specifying objectives and methods. Objectivesto research peculiarities of animated 3D graphics functions as a method of visual communication of TV brands based on direct advertisement and promo videos on Ukrainian channels. In order to reach the goal set in this study we raised an objective to monitor usage of animated 3D graphics in designing interim units between programs specified according to TV design quality ranking of the following Ukrainian channels: «1+1», «Inter», and «Ukraine» during 2015-2017.
Object of studyanimated 3D graphics in direct advertisement and promo videos on TV.
Chosen channels differ with their brand ideological visions but their analysis will help us to trace possible interrelations of 3D graphics usage according to corresponding all-around functions.
Elements of airtime designing in inter-programs units that use 3D graphics are investigated, namely: idents, promo videos, advanced advertising. Advertisement and airtime design made by mentioned methods are considered as a part of visual concept of brands. Graphics in the process of programs' designing (such as captions, titles etc.) are not analyzed in the context of this study.
Subject of study. Function features and usage of animated 3D graphics functions as a method of visual communication of TV brands.
3D graphics is analyzed here as a component of media design that is an instrument of branding. Methods of study. The work is based on an integrated approach that allows studying 3D graphics both in advertisement and promo videos. We used methods of classification, comparative and systematic methods, analysis of documentary information such as content monitoring, expert questionnaire and method of formal and figuratively stylistic analysis. Graphic and analytical method was used for visualizing study findings.
Research results. Level of Ukrainian TV design in the frames of world development is quite stable. Animated 3D graphics is well known in the industry and is developing quite fast. Visual design becomes a great advantage in the industry, makes product quality better and with more profit, TV channels consider broadcast design as a method of branding. For defining a general level of an airtime design on Ukrainian channels we reviewed their nominations at eight international TV design contests and festivals and composed an according ranking.
Having analyzed 12 winning promo videos of 4 Ukrainian TV channels at «Promax/BDA» festival, we came to a conclusion that animated 3D graphics is commonly used in the most videos of Ukrainian creators who were considered as world's best (9 examples which is 75%). Partially this statistic may seem that according to jury's opinion videos that use 3D graphics are more qualified. We also specified that some channels use this type of graphics almost in every researched video of theirs («ICTV», «Ukraine» -100%), some channels prefer partially usage of three dimensional rights in designing air time («1+1» -66,6%, «Іnter» -33,3%).
Thus, we systemized successes of Ukrainian channels in the field of TV design that were honored both on the national and international levels. Level of quality of designing an airtime was showed in the light of jury's marks. Moreover after looking through the list of winners of 8 international festivals («PROMAX/BDA», «New York Festivals International Television», «Bassawards», «The one club of creativity», «The Motion Awards», «D&AD Professional Awards», «Filmteractive», «Epica Awards»), we noted that Ukrainian channels took an active part in the first two festivals mentioned above. It was concluded that the best broadcast design belonged to «1+1» and «Ukraine» channels, which, in general, won international TV design and promotion contests for about 5 times, «Inter» and «ICTV» channels -4 times. Alongside to this it was observed that in 12 winning Ukrainian projects 3D graphics was used no less than in 75 % of examples. But while analyzing 50 winning works of years 2016-2017 that belonged to the world's best TV companies, it was demonstrated that an interest to 3D graphics had been withered contrary to «flat design» or motion graphics style on the whole that partially are partially used by 3D animation. An international Promax BDA prize is considered as one of the most prestigious by the specialists in this field, awarding it to Ukrainian channels means their remarkable achievements in media marketing, taking into account all the parts of the projects starting from promotion and design and finishing with branding and audience interaction. What also should be mentioned is a low activity of national TV brands in other festivals and contests of the industry. According to the results of taking winning places in 6 all-Ukrainian festivals (Teletriumph, Ukrainian Design: The Very Best Of, Kyiv International Advertising Festival, KAKADU Awards, ADC*UA Awards, RED APPLE*UA), we have defined eight Ukrainian channels that were mentioned by experts as some of the best in the field of promotion and TV design in Ukraine in 2001 -2016. Leaders among Ukrainian expert judgment are «ICTV», «1+1», «STB». For 15 years in the frames of Teletriumph festival 3D graphics was used by Ukrainian channels not less than in 59 % of 32 winning works.
Trends are already seen though are very limited on this stage. Analyzing victories and frequency of using 3D graphics in works of TV design allowed illustrating success of Ukrainian TV industry in visual communication, branding and promotion in general. The research is the beginning of generalizing Ukrainian TV design and has to be further developed. The review let us to define channels for further monitoring and partially summarize the development level of Ukrainian TV design.
To investigate ranges of using animated 3D graphics as a method of visual communications in advertisement, we monitored TV channels for presence of advertisement with 3D elements comparing to advertisement without 3D. We made records of TV airtime per day every quarter in the period from 1.03.2015 to 1.03.2017 of three Ukrainian channels: «1+1», «Inter», and «Ukraine». Thusly, during two years, we conducted 8 monitors of each channel, that stands for 27 around the clock monitors of interprogram block, specified and classified more than 17000 video works that is more than 1500 video hours.
27 media plans of direct advertisement release and TV programs previews were set up, we divided advertisements according to their type, advertised brand and its product, 3D graphics presence.
Monitoring stated that mostly it was used in direct advertisement and much less percentage have advanced advertising, promos, or idents (in average 85% till 15%). While collecting data, we observed that for audience communication channels give around 30 % of all inter-program block during 24 hours and less than 15 minutes of airtime in general. We made 27 detailed reports with 3D usage statistics in accordance with categories, types, brands, etc. and 8 general reports that allowed defining average figures of all channels during the research period. Thus we saw a positive dynamics of 3D graphics usage in every category. The research has development potential, inasmuch it is necessary to analyze other Ukrainian channels as well and to investigate connection with channel's rating, considering the fact that we already chose channels that have best criteria in TV design and that are leaders in viewer rating.
We may see from the general analysis of three channels in a three year period that 3D graphics was used in 53% of videos (in 9159 from 17201). In addition to that we observed a small increase in 2016 by 3% and decrease 2017 by around by 8% in comparison to 2015. According to a recapitulative statistics of direct advertisement we define regressive dynamics by 5% (from 56% to 51%). Advanced advertisements and promos actively used 3D graphics in 2016, in 2017 only 33% of video works had it that is less by 11% comparing to a starting point of 45% in 2015.
It is interesting to compare usage of 3D graphics by chosen channels during three year term. Mostly it is used by «Ukraine» channel (61%) that proves 3D graphics usage in 53% of videos (9159 videos from 17201). Leader in using 3D graphics in direct advertising is «Inter» channel (58%). «Ukraine» channel used 3D in advanced advertising and promos most commonly (81%). Conclusions. Scientific novelty of the received results lies in the fact that this study has first time: -investigated animated 3D graphics/ systemized parameters of 3D graphics as a method of visual communication, design, and branding in commercials and promo videos on Ukrainian channels; -generalized experience of using animated 3D graphics by Ukrainian media brands; -proved an importance of using 3D graphics elements in TV design nowadays; -monitored animated 3D graphics usage in designing inter-program blocks of «1+1», «Inter», «Ukraine» channels during 2015-2017 (around 20 thousands videos are analyzed, statistical classification between chosen channels, types, brands, products, video length that use 3D is conducted); -established that animated 3D graphics in airtime design has primarily image function and then informative; -separated animated 3D graphics as a part of 3D animation that is one of the directions of multimedia design; -systemized based on conducted monitoring, professional, creative and technological achievements in the field of promotion and design of Ukrainian TV brands; 4(16), June 2019 further developed: -knowledge on history development of 3D elements of advertising and TV design in the early ХХІ century; -structured research vocabulary; specified a definition of term «Animated 3D graphics» and set its indicators; -image of media design as a system element in image creating and visual communication of brands. | 3,435.6 | 2019-06-30T00:00:00.000 | [
"Business",
"Computer Science"
] |
Enhanced acyl-CoA:cholesterol acyltransferase activity increases cholesterol levels on the lipid droplet surface and impairs adipocyte function
Cholesterol plays essential structural and signaling roles in mammalian cells, but too much cholesterol can cause cytotoxicity. Acyl-CoA:cholesterol acyltransferases 1 and 2 (ACAT1/2) convert cholesterol into its storage form, cholesteryl esters, regulating a key step in cellular cholesterol homeostasis. Adipose tissue can store >50% of whole-body cholesterol. Interestingly, however, almost no ACAT activity is present in adipose tissue, and most adipose cholesterol is stored in its free form. We therefore hypothesized that increased cholesterol esterification may have detrimental effects on adipose tissue function. Here, using several approaches, including protein overexpression, quantitative RT-PCR, immunofluorescence, and various biochemical assays, we found that ACAT1 expression is significantly increased in the adipose tissue of the ob/ob mice. We further demonstrated that ACAT1/2 overexpression partially inhibited the differentiation of 3T3-L1 preadipocytes. In mature adipocytes, increased ACAT activity reduced the size of lipid droplets (LDs) and inhibited lipolysis and insulin signaling. Paradoxically, the amount of free cholesterol increased on the surface of LDs in ACAT1/2-overexpressing adipocytes, accompanied by increased LD localization of caveolin-1. Moreover, cholesterol depletion in adipocytes by treating the cells with cholesterol-deficient media or β-cyclodextrins induced changes in cholesterol distribution that were similar to those caused by ACAT1/2 overexpression. Our results suggest that ACAT1/2 overexpression increases the level of free cholesterol on the LD surface, thereby impeding adipocyte function. These findings provide detailed insights into the role of free cholesterol in LD and adipocyte function and suggest that ACAT inhibitors have potential utility for managing disorders associated with extreme obesity.
Free cholesterol is a key component of mammalian cell membranes and plays essential structural and signaling roles in mammalian cells. However, too much free cholesterol can cause cellular toxicity (1). Thus, an elaborate system exists to maintain cellular cholesterol homeostasis. Key to this regulation is the SREBP/SCAP machinery, which senses excess cellular free cholesterol and suppresses the expression of the enzymes for cholesterol synthesis, including HMGCR (3-hydroxy-3-methyl-glutaryl-CoA reductase), as well as the lowdensity lipoprotein receptor for cholesterol uptake (2)(3)(4)(5). Excess cholesterol can also trigger the degradation of enzymes for cholesterol synthesis (6,7). Another crucial biochemical reaction that maintains a proper level of free cholesterol in mammalian cells is cholesterol esterification, which is the synthesis of cholesteryl esters through the formation of an ester linkage between the 3-OH moiety in free cholesterol and the carboxyl group of a long chain fatty acyl CoA. This reaction is mediated by the enzyme acyl-CoA:cholesterol acyltransferase (ACAT) 3 (also known as sterol O-acyltransferase, or SOAT) (8). When in excess, free cholesterol is converted into cholesteryl esters and stored in lipid droplets (LDs), which are dynamic cellular organelles comprising a neutral lipid core enclosed by a phospholipid monolayer (9,10). Triacylglycerols (TAGs) and cholesteryl esters are two major storage neutral lipids that make up the core of LDs (8,11). Also, cholesteryl esters can be packaged as part of the neutral lipid core within plasma lipoproteins for lipid transport purposes (12).
There are two acyl-CoA: cholesterol acyltransferases (ACAT1 and ACAT2) in mammals, encoded by the ACAT1 and ACAT2 genes, respectively. ACAT1 and ACAT2 are closely related enzymes that are responsible for all intracellular cholesterol esterification (13)(14)(15). Both enzymes are membrane-spanning proteins located in the endoplasmic reticulum (ER) (8). ACAT1 is ubiquitously expressed in many cell types, whereas ACAT2 is only expressed in the cells secreting apolipoprotein Bcontaining lipoproteins, such as the liver and intestine (16 -18). It has been shown that cholesterol esterification catalyzed by ACAT2 is essential to cholesterol absorption in the intestine (19). Adipose tissue is the major storage site of excess calories in the form of TAGs in mammals (20). It also contains the largest pool of free cholesterol in the body (21,22). In obese subjects, over 50% of total body cholesterol resides in the adipose tissue (21). We have recently demonstrated a major contribution of adipose tissue to the storage of whole-body cholesterol (23). Because of a lack of ACAT enzyme activity in white adipose tissue, nearly all cholesterol (Ͼ 95%) exists in its free form (24). Most free cholesterol in adipocytes is believed to localize to the plasma membrane, where there are abundant caveolae, small flask-shaped invaginations that are highly enriched in free cholesterol (25,26). The activity of de novo cholesterol biosynthesis in adipocytes is fairly low (21). Therefore, much of the adipocyte cholesterol is obtained through the uptake of circulating lipoproteins.
Given that a large quantity of cholesterol is stored in adipose tissue, the absence of ACAT expression and activity in adipose tissue is intriguing. As mentioned above, excess free cholesterol is toxic, and the best way to handle excess cholesterol is to convert it into cholesteryl esters, which can be stored in the core of LDs. What are the benefits, then, for adipocytes to maintain low ACAT expression and activity? In this study, we first detected significantly increased level of ACAT1 in the adipose tissue/adipocytes of ob/ob mice. We further investigated the effects of increased cholesterol esterification on adipocyte function and LD dynamics by overexpressing ACAT1/2 in preand mature adipocytes. Our data demonstrated that increased ACAT activity partially blocked adipogenesis, caused dramatic enrichment of free cholesterol on the LD surface, and impaired key functions of adipocytes including lipolysis and insulin signaling.
ACAT1/2 overexpression in 3T3-L1 stable cell lines
Because normal adipose tissue maintains very low ACAT expression and activity, we suspect that increased ACAT expression/activity may exert adverse effects on adipose function and may also be associated with dysfunctional adipose tissue. Indeed, we found that the protein level of ACAT1 was ϳ7-fold higher in the adipose tissue of the ob/ob mice than that of the WT mice (Fig. 1, A and B). Given that ob/ob adipose tissue has very high levels of macrophages and that macrophages are known to express ACAT1 (27), the detected ACAT1 on whole adipose tissue could be derived from adipose tissue macrophages. We therefore isolated adipocytes: ACAT1 was detected in isolated adipocytes from the ob/ob mice but not WT mice (Fig. S1A). By contrast, the macrophage marker CD11b was detected only in the stromal vascular fraction, but not in isolated adipocytes. Thus, the increase in adipose tissue ACAT1 is at least in part due to increased ACAT1 in adipocytes of the ob/ob mice. Thus, it is important to determine whether and how increased ACAT activity may impact adipocyte function. For this purpose, we generated stable cell lines (3T3-L1) overexpressing FLAG-tagged ACAT1/2, an ACAT2 stabilizing mutant (ACAT2-C277A) (28), and their catalytic dead mutants (ACAT1-H460A, ACAT2-H360A, and ACAT2-H360A-C277A). For both WT and mutants of ACAT1 and ACAT2, the mRNA levels of ACAT1 or ACAT2 were increased by at least 3 orders of magnitude above endogenous levels in the empty vector (EV) control group (Fig. 1C). Accordingly, protein levels of ACAT1 and ACAT2 were also much higher in these overexpressing cell lines (Fig. 1, D and E), with the level of the ACAT2 stable mutant, ACAT2-C277A, ϳ5-fold higher than the WT (Fig. S1, B and C). In both 3T3-L1 preadipocytes (Fig. 1F) and differentiated 3T3-L1 adipocytes (Fig. 1G), total cholesteryl esters were significantly higher than that of control/EV cells when overexpressing ACAT1, ACAT2, and ACAT2-C277A, but not the catalytic dead mutants. The proportion of total cholesterol that was converted to cholesteryl esters was shown in Fig. 1 (H and I). Finally, the enzymatic activity of the ACATs was determined by a cholesterol esterification assay in which those stable cells were pulse-labeled with [ 14 C]oleic acid conjugated to BSA. Although WT ACAT1/2 and the ACAT2 stabilizing mutant (ACAT2-C277A) demonstrated significantly higher activities, the catalytic dead mutants appeared to be inactive (Fig. 1, J and K). Therefore, our 3T3-L1 stable cell lines were shown to overexpress functional ACAT1/2 with the catalytic dead mutants serving as useful controls.
Increased ACAT activity impairs the differentiation of 3T3-L1 preadipocytes
The effects of ACAT1/2 on adipogenesis were investigated by differentiating 3T3-L1 preadipocytes into mature adipocytes using our stable cell lines. The mRNA levels of adipose-specific genes, such as aP2, PPAR␥, and C/EBP␣, are up-regulated in normal adipocytes, whereas Prefl is mainly expressed in preadipocytes (29 -31). Thus, the expression of aP2, PPAR␥, and C/EBP␣ was induced during differentiation (day 8 versus day 0), whereas that of Pref1 was down-regulated ( Fig. 2A). However, after 8 days of differentiation, overexpression of functional ACAT1/2 blunted the changes in these differentiation markers. Overexpression of ACAT2 and ACAT2-C277A decreased the extent of differentiation by ϳ70% (Fig. 2, A and B), whereas the catalytic dead mutants had little or much weaker effects on differentiation (Fig. 2C). These results were reflected by reduced triglyceride accumulation as assessed by Oil Red O staining (Fig. 2D), with catalytic dead mutants serving as controls (Fig. 2E). Together, these data suggest that ACAT expression negatively impacts the differentiation of 3T3-L1 cells.
ACAT1/2 overexpression impairs lipid droplet morphology in adipocytes
Lipid droplets form during adipocyte differentiation/adipogenesis. We next examined the morphology of lipid droplets in mature adipocytes. ACAT1/2 overexpression, especially ACAT2, substantially reduced the LD size in adipocytes on day 8 after differentiation (Fig. 3, A and C, and Fig. S2A). The size of LDs also decreased in adipocytes on day 0 and 2, as well as in HeLa cell lines upon ACAT1/2 overexpression (Fig. S2, B-D). Catalytic dead mutants of ACAT1 and ACAT2 did not affect the LD size (Fig. 3, B and C). To exclude the possibility that the impaired LD size was related to the differentiation status, we additionally investigated the lipid droplet growth in mature adipocytes transduced by lentivirus carrying the ACAT1 or Cholesterol esterification in adipocytes Figure 1. Characterization of ACAT1/2 overexpression in 3T3-L1 stable cells. A, the protein level of ACAT1 in adipose tissue of WT and ob/ob mice (three mice for each genotype, male, 18 -20 weeks) as detected by Western blotting analyses. B, quantification of ACAT1 levels in A. C, the mRNA levels of ACAT1, ACAT2, and related mutants stably expressed in 3T3-L1 adipocytes. D and E, the protein levels of ACAT1, ACAT2, and related mutants stably expressed in 3T3-L1 adipocytes. OE, overexpression. F and G, the levels of cholesteryl esters (CE) in preadipocytes and adipocytes. H and I, the proportion of esterified cholesterol over total cholesterol. J and K, cholesterol esterification by ACAT1 and ACAT2 in preadipocytes. Nonspecific lipids (*) were used as references for normalization. 3T3-L1 preadipocytes were transduced with pBABE-puro EV or FLAG-tagged ACAT1/2 or ACAT1/2 catalytic dead mutants followed by the selection of puromycin to generate stable cell lines. Two-tailed Student's t test was used (means Ϯ S.D.; n ϭ 3). *, p Ͻ 0.05; **, p Ͻ 0.01; ****, p Ͻ 0.0001; ns, no significance.
Cholesterol esterification in adipocytes
ACAT2 genes, respectively (Fig. 3, D and E), and obtained consistent results that the presence of ACATs in mature adipocytes, especially ACAT2, reduced LD growth (Fig. 3, F and G). Importantly, this effect can be reversed by inhibiting ACAT1 with Sandoz 58-035 and ACAT2 with pyripyropene A (PPPA), respectively (Fig. 3, H and I).
Cholesterol esterification in adipocytes
CIDEC/Fsp27 is a well-known LD-associated protein. Although very low in preadipocytes, the expression of CIDEC is highly induced during differentiation to regulate lipid droplet dynamics in adipocytes (32)(33)(34). Hence, we examined the expression of CIDEC in mature adipocytes when overexpressing ACAT1/2. Overexpression of ACAT1/2 dramatically decreased the CIDEC expression in total cell lysates and LD fractions in mature adipocytes (Fig. 3, J and K), which implied Cholesterol esterification in adipocytes that increased ACAT activity in adipocytes may influence LDassociated proteins and disturb LD dynamics.
LDs are composed of a core of neutral lipids, including cholesteryl esters produced by ACAT1/2 and TAGs produced by DGAT1 and DGAT2 (11,35,36). The impaired LD size when overexpressing ACATs may also be related to TAG synthesis. Therefore, we examined TAG synthesis in adipocytes in the presence of ACAT1/2. The mRNA level of Dgat1 or Dgat2 was significantly reduced on day 8 of differentiation ( Fig. 4A), with concomitant dramatically reduced TAG production (Fig. 4B). Also, we found consistent results in mature adipocytes transiently overexpressing ACAT1 or ACAT2 (Fig. 4, C-E). Taken together, these data suggest that ACAT1/2 overexpression impairs LD expansion in adipocytes, possibly because of less LD fusion mediated by CIDEC and down-regulated TAG synthesis.
ACAT1/2 overexpression impairs lipolysis and insulin signaling in adipocytes
The dramatic changes in LD morphology in adipocytes upon ACAT1/2 overexpression may impact adipocyte functions. First, we investigated lipolysis in mature adipocytes transiently overexpressing ACAT1/2. ACAT1/2 overexpression almost abolished hormone-stimulated lipolysis in mature adipocytes (Fig. 5A). Perilipin is a key regulator for lipolysis and regulates lipases such as hormone-sensitive lipase (HSL) and adipose triglyceride lipase (ATGL) (37,38). Phosphorylated forms of HSL were clearly reduced in mature adipocytes transiently overexpressing ACAT1/2 (Fig. 5B). In addition to perilipin 1, perilipin 2 expression was also significantly reduced in mature adipocytes upon overexpressing ACAT2 ( Insulin triggers the uptake of glucose, fatty acids, and amino acids in the liver, adipose tissue, and muscle and promotes the storage of these nutrients in the form of glycogen, lipids, and protein, respectively (39,40). Protein kinase B (AKT), a downstream target of insulin, is one of the important regulators during this process. ACAT1/2 overexpression substantially decreased the level of p-AKT (S473), with ACAT2 overexpression displaying a stronger effect (Fig. 5, G and H). All these results strongly indicate that increased ACAT activity severely disrupts the metabolic functions of mature adipocytes. Immunofluorescence was carried out with anti-FLAG primary antibody on cells expressing FLAG-tagged ACAT1/2, but not on EV control cells. LDs were stained using BODIPY. Bars, 10 m. B, the effect of overexpressing catalytic dead mutants of ACAT1/2 on LD size on day 8 of differentiation. LDs were stained using BODIPY. Bars, 10 m. C, quantification of LD sizes in adipocytes stably expressing ACAT1/2. LDs from ϳ15 cells/cell type were used. D, protein level of ACAT1 in mature adipocytes transiently overexpressing ACAT1. E, protein level of ACAT2 in mature adipocytes transiently overexpressing ACAT2. F, the effect of transient overexpression of ACAT1/2 on LD size in mature adipocytes. Immunofluorescence was carried out with anti-FLAG primary antibody on cells expressing FLAG-tagged ACAT1/2, but not on EV control cells. LDs were stained using LipidTOX. Bars, 10 m. G, quantification of LD sizes in mature adipocytes overexpressing ACAT1/2. LDs from ϳ15 cells/cell type were used. H, the effects of ACAT1 and ACAT2 inhibitors on LD size in mature adipocytes. Bars, 10 m. LDs were stained by BODIPY. Sandoz 58-035 (ACAT1 inhibitor) and PPPA (ACAT2 inhibitor) were dissolved in DMSO. 10 M of inhibitor was added to respective cells for 10 h. I, quantification of diameters of 2 largest LDs in each cell type as shown in H. Two-tailed Student's t test was used (means Ϯ S.E.; n ϭ 20 LDs from 10 cells for each cell type). **, p Ͻ 0.01; ***, p Ͻ 0.01; ns, no significance. J, immunoblot (IB) analysis of CIDEC in cell lysates (CL) and LD fractions isolated from mature adipocytes transiently overexpressing ACAT1/2. K, quantification of CIDEC level in J. Two-tailed Student's t test was used (means Ϯ S.D.; n ϭ 3). *, p Ͻ 0.05; **, p Ͻ 0.01.
Cholesterol esterification in adipocytes ACAT1/2 overexpression promotes free cholesterol accumulation on lipid droplets in adipocytes
Next, we wanted to understand the molecular basis underlying the impact of ACAT expression on adipocytes. We focused on cholesterol, the substrate of ACAT enzymes and a key component of mammalian membranes. Strikingly, as revealed by filipin staining, there was appreciable free cholesterol accumulating on the LD surface instead of the plasma membrane when overexpressing ACAT1/2 in adipocytes on day 8 after differentiation, and ACAT2 overexpression showed stronger effects (Fig. 6, A and B). Moreover, this phenotype was also seen in mature adipocytes transiently overexpressing ACAT1 or ACAT2, respectively (Fig. 6, C and D). The biosynthesis of cholesterol is sensitive to the level of free sterols, the feedback of which elaborately regulates the endogenous synthesis and exogenous uptake of cholesterol. ACAT1/2 in adipocytes consumes free cholesterol, which may reduce the cholesterol sens-ing "pool" in the ER, thereby triggering the ER to keep producing cholesterol, leading to free cholesterol accumulation on lipid droplets. To test this hypothesis, two other approaches of depleting cholesterol were carried out in mature adipocytes: using hydroxypropyl--cyclodextrin (HPCD) and/or starvation medium (DMEM ϩ 1% lipoprotein-deficient serum (LPDS)). The depletion of cholesterol enhanced the accumulation of free cholesterol on the LD surface (Fig. 7). Striking accumulation of free cholesterol on the LD surface could be observed even in control cells when depleting cholesterol using HPCD (Fig. 7, A and C) or shutting down cholesterol uptake (LPDS medium) (Fig. 7, B and C).
Cholesterol esterification in adipocytes
in the presence of ACAT1/2 was increased either in total cell lysates or LD fractions isolated from mature adipocytes (Fig. 8, B and C). In summary, our results showed that the overexpression of ACAT1/2 in 3T3-L1 adipocytes perturbs cholesterol transport and localization, promoting cholesterol accumulation on the LD surface.
ACAT1/2 overexpression impairs cholesterol homeostasis in adipocytes
We next sought to investigate cholesterol status and homeostasis in mature adipocytes when ACAT1/2 are overexpressed. The free cholesterol level was significantly increased in mature adipocytes that were differentiated from 3T3-L1 preadipocytes stably overexpressing ACAT1/2 (Fig. 9A), as well as mature adipocytes transiently overexpressing ACAT1/2 (Fig. 9C) compared with control cells. Accordingly, increased free cholesterol was associated with the down-regulation of Hmgcr (Fig. 9, B and D), an SREBP-target gene that is sensitive to changes in cellular cholesterol status. Consistent results were also found in preadipocytes stably overexpressing ACAT1/2 (Fig. 9, E and F). Although overexpression of WT ACAT1/2 or ACAT2 stable mutant increased free cholesterol levels, the catalytic dead mutants of ACAT1/2 had no effects (Fig. 9, E and F). Furthermore, the increased free cholesterol could be reversed through inhibiting HMGCR activity by statin treatment (Fig. 9, G and H). Together, these results indicate that ACAT overexpression perturbs cholesterol homeostasis in adipocytes.
Discussion
Adipose tissue can store more than 50% of whole-body cholesterol. Our previous study demonstrated that mice deficient in both LDLR and adipose tissue had ϳ5-fold higher plasma cholesterol than Ldlr Ϫ/Ϫ mice (23). Thus, adipose tissue is just as important as the LDLR pathway for whole body cholesterol homeostasis. Cholesterol is usually stored in cells in the form of cholesterol esters through the actions of ACAT1/2. However, ACAT1/2 expression and activity are extremely low in normal adipocytes, which is highly intriguing. We hypothesize that increased ACAT expression/activity may exert adverse effects on adipose function and may also be associated with dysfunctional adipose tissue. Indeed, we show here for the first time that ACAT1 is dramatically up-regulated in the adipose tissue and adipocytes of the ob/ob mice (Fig. 1, A and B, and Fig. S1A). Thus, it is important to determine whether and how increased ACAT activity may negatively impact adipocyte function. In this work, we demonstrated that the overexpression of ACAT1/2 impaired adipogenesis and key metabolic functions of mature adipocytes. We further demonstrate that overexpressing ACAT1/2 caused striking accumulation of free cholesterol on the surface of LDs, as well as an overall increase in
Cholesterol esterification in adipocytes
utes to adipocyte function, because ACAT inhibition and ACAT deficiency have been reported to suppress lipogenesis and reduce the intracellular cholesterol pool (46). Here we found that ACAT overexpression reduced TAG synthesis and yet increased intracellular cholesterol, thus transgressing the strong correlation usually observed between adipocyte cholesterol content and TAG load (22). ACAT overexpression led to smaller LDs (Fig. 3, C and G), and hence too much ACAT could be expected to inhibit the formation of the adipocyte's characteristic unilocular LDs, which helps to maximize energy storage in minimal space.
ACAT2 appeared to display stronger effects than that of ACAT1, which may help to explain its far more restricted tissue expression. ACAT2 expression is mostly limited to the intestine and liver where it helps to package cholesteryl esters into apolipoprotein B-containing lipoproteins (47)(48)(49). Accordingly, endogenous ACAT2 was down-regulated during adipocyte differentiation unlike ACAT1 (Fig. S3 and Table S4). Moreover, we did not detect any difference in the protein level of ACAT2 between the adipose tissue of normal and ob/ob mice (data not shown), further confirming ACAT2's tissue specificity.
Cys 277 of ACAT2 can be ubiquitinated for degradation when the level of certain lipids is low. Free cholesterol and saturated free fatty acids can protect the protein from degradation
Cholesterol esterification in adipocytes
through inducing cellular reactive oxygen species to oxidize Cys 277 (28). This may be another reason for the absence of ACAT2 in adipose tissue that stores the most lipids, including free cholesterol and free fatty acids. Cysteine ubiquitination of ACAT2 is believed to be an important mechanism of maintaining lipid homeostasis by sensing lipid overload. A stable mutant of ACAT2 (C277A) caused higher insulin sensitivity in the liver of Acat2 KO mice (28). In addition to ACAT2, the stability of HMGCR is also sensitively regulated by lipid levels (6). The sterol-induced degradation of both ACAT2 and HMGCR is accomplished through the same gp78 -Insigs complex. Gp78 is a membrane-anchored ubiquitin ligase and associates with Insig1/2. Both ACAT2 and HMGCR are substrates of gp78 (50 -53). When the cellular cholesterol is low, the gp78 -Insig complex would release and stabilize HMGCR to increase cholesterol synthesis and, at the same time, ubiquitinate Cys 277 of ACAT2 to reduce the esterification. This may be related to the finding in Fig. 9. In ACAT2-overexpressing cells, the continuous consumption of free cholesterol may inhibit HMGCR deg-radation and increase cholesterol synthesis, which in time reversibly inhibits Hmgcr transcription. ACAT1/2 expression in adipocytes perturbed cholesterol homeostasis and resulted in an aberrant accumulation of free cholesterol on lipid droplets (Figs. 6 and 9). Cholesterol is synthesized in the ER and quickly transferred to other organelles, especially the plasma membrane. However, the molecular mechanism of this transport is still elusive despite some discoveries (54 -56). Our lab recently demonstrated that ORP2 is a coexchanger of cholesterol and PI(4,5)P 2 on the plasma membrane (57). Also, ORP2 might regulate cholesterol on the LDs (58). This implies ORP2 might be involved in the cholesterol transport in adipocytes when overexpressing ACAT1/2, which may be tested in future studies. In addition, cholesterol depletion by methyl--cyclodextrin (similar to HPCD used in this study) increased caveolin-1 content in lipid droplets, and Caveolin-1 KO mice showed reduced free cholesterol level on LDs (43). The association of caveolin-1 and cholesterol in adipocyte lipid droplets implies that caveolin-1 may play an impor-
Cholesterol esterification in adipocytes
tant role in the aberrant accumulation of free cholesterol on lipid droplets when expressing ACAT1/2 (Fig. 8). Indeed, the redistribution of caveolin-1 to LDs has been proposed to divert free cholesterol from other intracellular pools (43). Thus, overexpressing ACAT1/2 may limit cholesterol availability for proper caveolae function at the plasma membrane, causing adipocyte dysfunction.
Perilipin and CIDEC/Fsp27 are two key LD-associated proteins in adipocytes for regulating lipolysis and lipid dynamics, respectively. Cholesterol accumulating on LDs when ACAT1/2 are overexpressed may interfere with the localization of perilipin on LDs, thereby affecting lipolysis. Also, it may affect the functions of other lipid droplet-associated proteins such as CIDEC (Fig. 3J), DGAT2 (Fig. 4, A and D), and HSL (Fig. 5). All the perturbations in adipocytes caused by increased esterification seem to be associated with the proteins coating the LD surface. Hence, this work strongly implies that free cholesterol localizing to the surface of LDs interferes with the functions of these metabolic proteins.
In summary, our current work sheds light on why adipocytes, the cellular fat reservoirs, store so little esterified cholesterol and further highlights the importance of adipocyte cholesterol on lipid droplet dynamics and caveolae function. Our work uncovers a unique aspect in the regulation of cholesterol homeostasis in adipocytes and suggests that increased ACAT proteins and activity may underpin certain adverse metabolic changes under pathological conditions such as extreme obesity. Therefore, ACAT inhibitors may be tested in treating disease conditions associated with extreme obesity.
Experimental procedures
Mice C57BL/6J ob/ob mice and their age-matched WT control mice were maintained at 22 Ϯ 1°C with a 12-h light/dark cycle and ad libitum access to standard rodent chow and water. Epididymal adipose tissue was dissected from male mice (18 -20 weeks old) and immediately snap frozen for later analysis. Ovarian adipose tissue was dissected from female animals (19 -32 weeks old) and immediately used for adipocyte isolation. All procedures were approved by the Garvan Institute/St. Vincent's Hospital Animal Experimentation Ethics Committee and followed guidelines issued by the National Health and Medical Research Council of Australia.
Adipocyte isolation
Ovarian fat pads dissected from mice were minced in 6-cm dishes. Digestion solution was freshly prepared by adding collagenase D (Sigma-Aldrich) (0.75 mg/ml), 0.901 mM CaCl 2 , and 0.493 mM MgCl 2 in Dulbecco's PBS buffer. Minced fat pads were digested using 2 ml of digestion solution in 37°C shaking water bath for 30 min. 10 ml of prewarmed DMEM (Life Technologies) medium was added to suspensions that were filtered through a 100-m cell strainer (Falcon). After centrifugation at 700 ϫ g for 5 min, mature adipocytes and stromal vascular fraction pellets were collected, respectively, for protein extraction.
cDNA constructs
FLAG-tagged human ACAT1 and ACAT2 were subcloned into retroviral vector pBABE-puro using standard procedures. Primers were designed in-frame according to the multiple cloning site of pBABE-puro and are listed in Table S1. The catalytic dead ACAT1/2 (ACAT1-H460A and ACAT2-H360A) and ACAT2 stable mutants (ACAT2-C277A and ACAT2-H360A-C277A) were generated by a two-step site-directed mutagenesis (59). Site-directed mutagenesis primers were designed containing mutations of the corresponding amino acid change and are listed in Table S2. All constructs were verified by Sanger sequencing.
Mammalian cell culture and transfection
HEK293FT and Phoenix Eco cells were cultured in high-glucose DMEM supplemented with 10% FBS (Life Technologies) and 1% penicillin/streptomycin/glutamine (PSG) (Life Technologies). 3T3-L1 cells were cultured in high-glucose DMEM supplemented with 10% newborn calf serum (Life Technologies) and 1% PSG. The cells were incubated at 37°C with 5% CO 2 , and the medium was changed every 2 days. DNA transfection was performed using Lipofectamine TM LTX and Plus reagent (Life Technologies) according to the manufacturer's instructions.
Viral transduction
To generate stable cell lines or to transfect mature adipocytes, viral overexpressing constructs were transduced to target cells. Briefly, 2 ϫ 10 6 Phoenix Eco cells for retroviral overexpression or HEK293FT cells for lentiviral overexpression were seeded in 10-cm dishes 24 h prior to transfection. For retrovirus production 6 g of pBABE-puro plasmids or for lentivirus production 10 g of RRL-PGK plasmids together with 3 g of pRSV (Rev), 2 g of pMD.G (vesicular stomatitis virus), and 4 g of pMDLg/pRRE (Gag/Pol) were transfected into target cells for 48 h. The viral medium was filtered through a 0.45-m filter (Life Technologies) and collected into 15-ml falcon tubes. 8 g/ml Polybrene (Sigma-Aldrich) was added to the filtered viral medium, which was then added to the target cells and incubated for 24 -48 h. For mature adipocytes transduced with lentivirus, the cells were split and reseeded into different dishes depending on the experimental purposes. For the generation of stable cell lines transduced with retrovirus, the viral medium was refreshed to DMEM, 10% newborn calf serum, 1% PSG containing 4 g/ml puromycin (Life Technologies) to select positively infected cells. The cells were selected for 72 h followed by a further 48 h.
Cell treatments
The cells were grown to desired cell confluency using DMEM supplemented with serum and PSG depending on cell types. Insulin
RNA extraction and quantitative real-time PCR
Total RNA was extracted using TRIzol TM reagent (Sigma-Aldrich). Mammalian cells were grown in 6-well plates. The cells were washed with PBS once and then lysed by the addition of 1 ml of TRIzol TM regent and incubation for 5 min at room temperature. 200 l of chloroform (Sigma-Aldrich) was added to the lysates, shaking for 30 times violently, followed by incubation for 5 min at room temperature. The mixture was then centrifuged at 12,000 ϫ g for 15 min at 4°C. 350 l of upper aqueous phase was carefully removed to a fresh tube, and the same volume of isopropanol (Sigma-Aldrich) was added and mixed well by the vortex for 5 s. The mixture was centrifuged at 12,000 ϫ g for 10 min at 4°C. After removing the upper aqueous phase, the pellet was washed twice with 1 ml of 75% ethanol (Sigma-Aldrich). The centrifugation of 7500 ϫ g for 5 min at 4°C was carried out between each wash. The RNA pellet was dried in fume hood after removing ethanol and then dissolved in RNase free water (Life Technologies). RNA concentration and purity were determined using Nanodrop spectrophotometer (Thermo Fisher Scientific). 1 g of RNA was adopted for cDNA synthesis using the high-capacity cDNA reverse transcription kit (Thermal Fisher Scientific). Quantitative RT-PCR was performed with a Rotor-Gene 6000 real-time PCR machine (Qiagen) using KAPA SYBR Green mix (KAPA Biosystems). The mRNA levels were normalized against the housekeeping gene and compared with control samples. All quantitative RT-PCR primers used in this study are listed in Table S3.
Immunoblot analysis
The samples were mixed with 2ϫ Laemmli buffer, incubated for 10 -15 min at 70°C, and then subjected to 10% SDS-PAGE. After electrophoresis, the proteins were transferred to Hybond-C nitrocellulose filters (GE Healthcare). Incubations with primary antibodies were performed at 4°C overnight. Secondary antibodies were peroxidase-conjugated AffiniPure donkey anti-rabbit, donkey anti-mouse, or donkey anti-goat IgG (HϩL; Jackson ImmunoResearch Laboratories) used at a 1:1000 dilution. The bound antibodies were detected by ECL Western blotting detection reagent (GE Healthcare or Merck Mil-lipore) and visualized with Molecular Imager ChemiDoc TM XRSϩ (Bio-Rad).
Filipin staining
The filipin complex (Sigma-Aldrich) was dissolved in DMSO (Sigma-Aldrich), and a working solution of 0.05 mg/ml in PBS containing 10% FBS was used. Cells grown on coverslips were rinsed with PBS three times and then fixed with 4% paraformaldehyde (PFA) (EM Science) for 15 min. After washing three times with PBS, the cells were incubated with 1 ml of glycine (Sigma-Aldrich) in PBS (1.5 mg/ml) for 10 min at room temperature to quench PFA. Then cells were stained with 1 ml of filipin working solution for 2 h in the dark at room temperature. When lipid droplets staining was required, 1 g/ml BODIPY 493/503 (Life Technologies) or HCS LipidTOX TM deep red neutral lipid stain (1:500) (Life Technologies) was supplemented to the filipin working solution at the last 15 or 45 min depending on the dye used. The cells were mounted on the slides after rinsing with PBS three times.
Immunofluorescence
Cells grown on coverslips were fixed with 4% PFA for 15 min and then permeabilized using 0.1% Triton X-100 (Sigma-Aldrich) in PBS for 30 min. The cells were washed with PBS three times and then blocked using 1% BSA (w/v) (Sigma-Aldrich) diluted in PBS for 1 h at 37°C. After blocking, the cells were incubated with primary antibody at 4°C overnight. The primary antibody was diluted in PBS at the ratio of 1:100. The cells were then washed three times in PBS before the incubation with the appropriate secondary Alexa Fluor antibody (Life Technologies) at 37°C for 1 h. The secondary antibody was diluted in PBS at the ratio of 1:500. The coverslips were washed three times for 5 min each time in PBS and then mounted onto slides using ProLong Antifade Gold reagent (Life Technologies) and sealed with nail polish. When filipin staining was required after immunofluorescence, the cells were permeabilized using 0.1% saponin (Sigma-Aldrich). All antibodies were diluted in 0.05% saponin. Confocal microscopy was performed using an Olympus FV1200 laser scanning confocal microscope (Olympus, Tokyo, Japan). A 100ϫ/1.4 oil immersion objective was used for all imaging except for special specifications. The diameters of the LDs were measured using ImageJ software (National Institutes of Health).
Oil Red O staining
3T3-L1 adipocytes grown in 6-well plates were washed with PBS once before fixed with 2 ml of 4% PFA for 1 h at room temperature. After fixing, 2 ml of 100% isopropanol (Ajax FineChem) was added to fixed cells. The plate was swirled to mix well. The mixture was removed and washed with 2 ml of 60% isopropanol (Ajax FineChem). After the removal of isopropanol, the plates were left to dry. 2 ml of Oil Red O (Sigma-Aldrich) solution was added and left to incubate for 10 min. The cells were then immediately washed with MilliQ H 2 O four times until all residual Oil Red O solution was removed. The plates were dried completely before imaging by a scanner.
Cholesterol esterification in adipocytes
Lipolysis assay 3T3-L1 mature adipocytes overexpressing ACATs were split into 96-well plates. The lipolysis assay was carried out using the glycerol release assay kit (Biovision) according to the manufacturer's protocol with minor changes. To induce lipolysis, 75 l of lipolysis assay medium was added to cells for indicated time points with 10 M isoproterenol supplement. Glycerol standard was made using dilutions of glycerol in glycerol assay buffer. 50 l of lipolysis assay medium was used to determine the glycerol released in samples. The reaction mixture was added to glycerol standards and samples and incubated for 30 min at room temperature protected from light. The reaction results were determined using a plate reader at 570 nm. Glycerol released from samples was normalized to the protein concentration of each sample.
Neutral lipid extraction
After washing cells once with PBS, the neutral lipids were extracted by a 2-ml mixture of hexane (Ajax FineChem) and isopropanol (Ajax FineChem) (3:2) for 30 min in the fume hood. The solvent was then transferred into 2-ml glass vials. Another 1 ml of fresh hexane and isopropanol was used to collect the lipid residues in the dish and then transferred to glass vials together with the previous 2-ml solvent. The lipids were dried using a speed vacuum centrifuge. The cells were lysed with 0.1 M NaOH (Ajax FineChem) for 15 min at room temperature after the dish was dried. The protein concentrations were determined by bicinchoninic acid (Thermo Fisher Scientific) assay.
Thin-layer chromatography
Neutral lipids were reconstituted in 60 l of hexane. The samples were then loaded on and separated using a Silica Gel 60 plate (Millipore) and developed in a solvent system consisting of heptane/diethyl ether/glacial acetic acid (90:30:1) (Ajax FineChem). Separated lipids were stained with iodine for ϳ15 min. The TLC plate was scanned using the Epson Perfection 4490 Photo, and TAG bands were quantified using ImageJ software and normalized to the protein concentration.
Cholesterol and cholesteryl ester measurement
Cholesterol was extracted using the same method as the neutral lipid extraction. The Amplex TM Red cholesterol assay kit (Life Technologies) was used to check the levels of free cholesterol and cholesteryl esters according to the manufacturer's protocol. Cholesterol-containing samples were diluted in an appropriate volume of 1ϫ reaction buffer. Free cholesterol was measured by the enzyme-coupled reaction without cholesterol esterase. The cholesteryl esters were hydrolyzed into free cholesterol by cholesterol esterase and were then detected by subtracting free cholesterol from total cholesterol. The reactions were incubated for 30 min at 37°C protected from light before measuring fluorescence using a microplate reader.
Cholesterol esterification assay
3T3-L1 preadipocytes stably overexpressing ACAT1/2 were seeded in 6-cm dishes and grown to ϳ60 -70% confluency. The cells were washed once using PBS, incubated with 2 ml of cholesterol-free medium (LPDS medium) overnight. The starvation medium was replaced by DMEM supplemented with 20% FBS for 5 h. [ 14 C]Oleic acid (1 Ci) conjugated to BSA was then directly added to the medium, and the cells were chased for further 2 h. The cells were washed twice using buffer A (50 mM Tris-HCl, 150 mM NaCl, 0.2% (w/v) BSA, pH 7.4) and once with buffer B (50 mM Tris-HCl, 150 mM NaCl, pH 7.4). The liquid residues were removed completely. Lipid extraction and TLC were carried out as described above. The TLC plate was exposed to a BAS-MS imaging sheet (Fujifilm, Tokyo, Japan) for 5-7 days in an enclosed cassette at room temperature before visualizing the cholesteryl ester band using the FLA-5100 phosphorimaging device (Fujifilm). The relative intensities of bands corresponding to cholesteryl esters were quantified using ImageJ.
Statistical analysis
All data were expressed as means Ϯ S.D. or means Ϯ S.E. Comparisons between two groups were analyzed using twotailed Student's t test or one-way analysis of variance using GraphPad Prism 6.0 software. Differences at values of p Ͻ 0.05 were considered to be significant. | 8,458.8 | 2019-11-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
Plato, Aristotle & the Dialectics of Poetry
The present paper attempts at estimating the legacy of two of the seminal philosophical minds, Plato and Aristotle. Their ideas have been so instrumental in shaping western critical literary tradition that any discussion on literary theory and criticism has to have them as a point of reference. Plato’s negative conception of mimesis is juxtaposed with Aristotle’s affirmative stand. The paper also examines the various philosophical and pragmatic charges labelled against poetry by Plato in his works such as Republic, Phaedrus and Ion. The paper concludes with a general overview of critical responses to Plato by succeeding men of letters.
INTRODUCTION
The process of poetic creation has since long been a matter of constant debate in the literary theoretical circle. Since classical antiquity, this idea has been subjected to frequent negotiations. Any diachronic investigation on literary theory is inevitably concerned with questions relating to the moral and social agency of art. The earliest discussions on the ontological essence and epistemological dimensions of art is inexorably linked to poetry. Where does poetry come from? What is its social and cultural validity? These questions are the common threads which bind literary thinkers from Plato to Aristotle, from Horace to Dryden or from Shelley to Eliot. In fact, the Platonic-Aristotelian debate about mimesis (Greek for imitation), on which much of modern literary theory rests, is more concerned with determining the ontological essence of art than anything else.
II. PLATO, ARISTOTLE AND THE STATUS OF POETRY
We know that Plato had an aversion for art, though he admired Homer, because he believed, that a work of art being imitative has a corrupting influence, it miseducates the ideal citizens of the ideal society.
For the Athenian sage, the greatest human potential was the quest for truth and as humans one should strive for it. Much of Plato's negative conception of mimesis stems from his Theory of Forms. He believed in the existence of a parallel universe, different from the physical world we live in. He calls the ideal world the 'World of Being' as opposed to the 'World of Becoming', the corporeal world of change, decay and death.
For Plato, everything in our physical world, from objects to ideas, is but a representation or mimesis of the unchanging originals, the forms of those objects or ideas which exist in the real world, the World of Being.
When a poet writes a poem about time, for instance, he bases his poem on the concept of time that exists in the World of Becoming rather that the ideal form or concept of time that exists in the World of Being. Since poets write about objects and ideas from the physical world, which are themselves an imitation of their forms in the unchanging ideal world of being, hence, poetry is twice removed from the world of formstwice removed from reality. This being-becoming dichotomy is at the very heart of Plato's dialectics.
Plato's negative conception of poetry is apparent in book X of his Republic. It is where he propounds his famous theory of forms which raises questions on the moral agency of poetry. His dialogues also raise few e-ISSN : 2620 3502 p-ISSN : 2615 3785 International Journal on Integrated Education Volume 3, Issue IX, September 2020 | 2 pertinent questions on the social function of poetry. In his Phaedrus Plato anticipates Freudian notion of the tripartite nature of the human psyche. He believed that human psyche is not a singular or monolithic entity, rather, it is divided into three parts, the rational side, the irrational and the mediating side. He consummates this idea by giving a beautiful and illustrative metaphorical parallel.
III. MAIN PART
Human mind, Plato says, is like a chariot driven by two winged horses, one sane and the other insane, steered and controlled by the charioteer. Poetry, being fanciful and counterfeited appeals to the irrational side of the human psyche, unlike mathematics, natural sciences and philosophy which are apprehended by means of our cerebral faculty, thereby making the readers emotionally vulnerable and weak. Further, in his Ion Plato tactfully concludes that poetry is a kind of madness or contagion. He presents a beautiful analogy of a magnet with series of iron rings attached one after another. Just as the magnetic current flows from the magnet into one ring leading to the other, the divine frenzy of poetry passes down from God to the poet to the rhapsode and then to the audience. Plato quotes, "For the poet is a light and winged and holy thing, and there is no invention in him until he has been inspired and is out of his senses, and the mind is no longer in him: when he has not attained to this state, he is powerless and is unable to utter his oracles." Further, he says, "…that the poets are only the interpreters of the Gods by whom they are severally possessed" In short, for Plato poetry is bad because it is a result of divine possession causing absolute delirium.
IV. ANALYSES
Plato's ideas have been so pervasive and far ranging that a thinker aptly describes that all of western philosophy is but a series of footnotes to Plato. Such has been his influence as a man of great acumen. His ideas have greatly helped in shaping modern disciplines from literature to politics, psychology to law, philosophy to logic and so on. In the context of literary theory, Plato has to be seen as a beginning, the originary source or, to use a word oft repeated, the logos. Every form of literary theory, be it the pragmatic orientation of Aristotle or Longinus or the objective approach of Eliot or Brooks, is in one way or the other a response to Plato's notion of mimesis, social usefulness of poetry and the moral agency of the poet.
Aristotle's notion of mimesis differs significantly from that of Plato's. Poetry for Plato was bad for various metaphysical and psychological reasons discussed above. Aristotle vitalized and energized the poetic process as something positive and invigorating. He raised the status of poetry which had received a cogent blow from Plato's dialectics. The mimetic process, Aristotle claims, is a positive, natural and powerful tool.
As children we learn through imitation. Even as grown-ups we share a propensity for aesthetic appreciation of art. In fact, it is the very mimetic process which enhances the aesthetic quality of an object or idea that may otherwise in reality be ugly or unpleasurable. Who likes war? Nobody does. But we do appreciate a painter's pictorial representation of war, for that matter even bloodshed. For Aristotle, the imitative process is what makes literature truer than history. While history is fact-based or incidental, literature is imaginative, philosophical and transcendental in nature. It defies temporal and spatial bounds. Shakespeare, for instance, 'is not for an age'. If history is characterized by incidental fidelity and particularity, literature is rendered unique by its universality. This positive conception of the mimetic process has been Aristotle's eternal legacy to the field of literary studies. Aristotle also negates the Platonic notion that art, particularly poetry appeals to the weaker, inferior side of human psyche thereby making them emotionally vulnerable. In chapter VI of Poetics he not only defines tragedy (by which he meant poetic drama of Periclean Athens) from an ontological perspective but also discusses the impact it has on the ideal audience. Tragedy, for Aristotle, "is an imitation of an action that is serious, complete and of certain magnitude; in language embellished with each kind of artistic ornaments;
CONCLUSIONS
Literature progresses and evolves along such conflicting standpoints, so has been the story of poetry.
In every epoch poetry has been and will be seen with a sense of suspicion. As the world becomes more technological and scientific the doubts will certainly intensify. What place do poets occupy in this modernday complex world order? Is poetry becoming redundant and useless? Such questions loom large and do make plausible claims too. However, poetry continues to be written and appreciated, and so does the debate. The perfectly all right to be an engagé writer as long as you don't think you're changing things. Art is our chief means of breaking bread with the dead . . . but the social and political history of Europe would be exactly the same if Dante and Shakespeare and Mozart had never lived." As discussed earlier, the process of poetic creation has been a much-contested issue from Plato to Eliot and beyond. However, poetry continues to be written and consumed and in that process criticism thrives. Poetry may not change the historical or political trajectory of the world, as Auden says, but the study of it certainly charters the marvelous imaginative course of mankind.. | 2,039.8 | 2020-09-04T00:00:00.000 | [
"Philosophy"
] |
Statistically Thinned Array Antennas for Simultaneous Multi-Beam Applications
Statistically thinned array antennas are usually employed to form single-beam radiation patterns. In this work, the possibility to adopt such type of antennas to obtain multiple-beam patterns is successfully explored. In particular, two schemes are proposed and compared. In the first one, multiple-beam patterns are realized by considering each beam corresponding to a different feeding network. In the second scheme, multiple-beam behavior is achieved by a single feeding network. A key question addressed in this manuscript is given by the analysis of the statistical deviation of the synthesized radiation pattern, as compared to the reference one. To this end, the up-crossing method is employed. In particular, the assumption of symmetric thinned arrays leads to analytical results, but avoids the adoption of the simplified hypothesis which usually give inaccuracy. The proposed approach is verified by a Monte Carlo analysis, and shows very good agreement between empirical data and theoretical predictions.
I. INTRODUCTION
S TATISTICALLY thinned arrays give a type of random array obtained by removing/turning off some elements from the so-called reference filled array, according to a probabilistic law which depends on the amplitude taper of the original reference array [1]- [4]. This type of array is appealing, as it requires a reduced number of elements (with respect to the reference array) to achieve the same resolution, with the peak side-lobe level mainly influenced by the residual elements after thinning operation. Moreover, no amplitude-tapering is needed, so that T/R modules can be used in their optimised configuration [1] [5]. Thinned arrays can be usefully adopted in a variety of applications, including satellite communications, radio-astronomy, groundbased high frequency radars, and interference cancellation by adaptive beam-forming. They can be generally adopted in all applications primarily requiring high resolution and low secondary lobes, rather than high gain [6]- [8]. They could be also exploited in the framework of mm-waves communications [9]- [11].
Thinning operation can be typically realized by the adoption of specific optimisation procedures [12]- [18]. In spite of better performance, these approaches generally entail a high computational cost, which can become cumbersome for large antenna arrays [8].
In this paper, statistically thinned arrays (STA) are considered. They are treated in terms of excitation coefficients typically given by binomial random variables. Other schemes assuming different levels for excitations have been also proposed in the literature [19]. Even if they provide a better approximation for the desired array factor, a more complicated feeding network is however required by this latter strategy.
The array factor of STA is a stochastic process which needs to be characterised by resorting to the probability theory. A statistical characterization is relatively easier in the presence of a high number of antenna elements, when the Central Limit Theorem (CLT) can be applied [3]. This is just the case where the reduction in the number of elements is more relevant. An accurate statistical characterization of a thinned array is generally difficult to obtain. Nonetheless, an a priori estimation of the array pattern (such as in terms of the peak side-lobe level) as a function of the array features and the thinning level, is highly desirable. To satisfy this need, several results have been produced over the years in literature, starting from the papers [1] and [3]. Recently, in [20], more accurate results have been presented for the case of symmetric thinned arrays. In particular, the up-crossing method has been used, but avoiding simplified assumptions which degrade the accuracy.
Statistically thinned arrays are generally adopted for single-beam applications, with possible linear phase excitation if a beam steering feature is required. However, many practical cases exist which impose the presence of simultaneous multiple beams [6]. Therefore, in this contribution, STA is applied to realize multiple-beam patterns. Two schemes are proposed. In the first one, each beam is associated to a different feeding network; hence, simultaneous independent beams can be obtained. The second scheme relies on a single feeding network, so the beams are no longer independent. In both cases, the approach developed in [20] is adopted to estimate the achievable performance in terms of two parameters, namely the array factor variance and the "distance" error between the statistical array factor and the reference one. The former is a local measure of the array factor dispersion around the reference array; the latter provides a "global" metric. In particular, by assuming symmetric STA, the mentioned performance parameters are obtained analytically, even if they account for the non-stationarity of the array factor, often neglected in other studies. Monte Carlo analysis is applied to successfully validate the theoretical predictions. Furthermore, we are dealing with the array factor only and no mutual coupling is assumed between antenna elements [21]. In any case, since thinned arrays are usually obtained from periodic lattices, they still allow a more adequate control of mutual coupling than aperiodic arrays. [4].
The work is organised as follows. Section II contains the fundamental concepts on single-beam statistically thinned arrays. In Section III, the two thinning schemes for multiplebeam array factors are outlined, while in section IV they are tested and validated by numerical analysis. Conclusions and potential future developments are finally reported. In addition, the paper includes an appendix section to support the theoretical derivations.
II. SYMMETRIC STATISTICALLY THINNED ARRAYS
For the sake of argument, we briefly report some basic concepts regarding statistically thinned arrays. Let us consider a linear array of N isotropic radiators arranged along the x axis within the segment [−L/2, L/2], L being the array aperture in terms of wavelength (refer to Fig. 1). N is assumed to be even, and the elements are half-wavelength spaced at x n = −x −n = 0.25 + (n − 1)0.5, with n = [1, 2, ..., N/2] (i.e., there is no element in the position x = 0). Moreover, the amplitude coefficients are chosen so that A −n = A n . The corresponding array factor can be written as where u = cos θ and u 0 = cos θ 0 , with θ and θ 0 being the observation and the steering angles, respectively. Accordingly, the visible space is given by the interval is the so-called reference filled array factor, to be approximated by the thinned array. Herein, we address the case of symmetric thinned arrays, which are obtained by thinning only half the array, and then, for each remaining element, by locating further elements in the position −x n . The corresponding thinned array factor is given by [20] F (u) = 2C where {F n } N/2 n=1 are independent Bernoulli random variables. In particular, P r {F n = 1} = 1 − P r {F n = 0} = F n = p n (P r {·} is a probability measure, while the term F n represents the mean of F n ). Also, 0 ≤ p n = α A n / max n {A n } ≤ 1, with 0 < α ≤ 1 being the thinning factor. Indeed, for α = 1, natural thinning is obtained. C = max n {A n }/α. Note that, since an uniform arrangement has been considered, F (u) is a periodic function.
Since the {F n } N/2 n=1 are random variables, F (u) is a stochastic process whose mean and variance are respectively given as [20] where P (u) = F 2 (u) is the power-pattern of the (symmetric) thinned array, and P (u) gives its mean.
If the number N is sufficiently large, by virtue of the Lyapunov Central Limit Theorem [22], F (u) is Gaussian (for each u), that is F (u) ∼ N F ref (u), σ 2 (u) . Furthermore, since F (u) is periodic, µ(u) and σ(u) are periodic as well [20]. Under these conditions, the cumulative distribution function (cdf ) of the array factor magnitude (and consequently of the power-pattern) is easily found to be expressed as [20] [23] [24] [25]. It can be shown that eq. (5) can be written in a closed-form [23], by exploiting the condition that for positive arguments the Qfunction can be written in a closed-form with very small errors [25].
It must be remarked that the above simple results are achieved due to the assumption of symmetric configurations. For general asymmetric arrays, the distribution of the array factor magnitude can be expressed in terms of a generalised non-central chi-square distribution with two degrees of freedom [26]. In this case, no closed-form can be obtained. Anyway, symmetric and asymmetric thinned arrays return the same mean array factor and the average of N t . Furthermore, the difference in the achievable performance are not so relevant, as shown in [20].
III. MULTI-BEAMS STATISTICALLY THINNED ARRAYS
In this section, two schemes for obtaining a thinned array factor consisting of multiple beams are introduced. Basically, they are obtained by adapting the general statistical thinning approach outlined in the previous section. For convenience, such schemes are addressed in the sequel as scheme 1 and scheme 2.
The starting point is the definition of the multiple-beam reference array factor with B n = M m=1 e −2jπxnum . It is seen that F ref (u) consists of M identical beams which are steered at the directions u m . Since A −n = A n (and real) and B −n = B * n , it is useful to arrange (6) as withà n = A n |B n | and −ϕ n = ∠B n , or equivalently as A n [a n cos(2πx n u) + b n sin(2πx n u)] A. SCHEME 1 By this scheme, thinning is achieved as follows It is noted that by this scheme all the M beams share the same random coefficients {F n } N/2 n=1 , which are the same as in (2). This means that the actual excitation coefficients pertaining to the multiple-beam reference array factor have not been employed in defining the binomial random variables and that, simply, all the beams are thinned in the same way. This scheme, however, allows to obtain independent steerable beams, in the sense that each beam can correspond to a different chain of phase shifters and therefore to a different signal [27]. Hence, this scheme can be exploited for simultaneous transmission of multiple signals.
The mean and variance of the above array factor are and with P M1 (u) being the power-pattern related to scheme 1.
. Moreover, the probability distribution of N t is the same as the single-beam case. Therefore, by this thinning scheme, introducing additional beams does not change the distribution of the actual number of antenna elements that remains after the thinning. Finally, as in the classical single-beam case, the array factor "statistically" tends to the reference one when the number of elemental radiators increases. VOLUME 4, 2016 B. SCHEME 2 In this case the multi-beam nature of the array factor is directly considered in thinning procedure. More in detail, the binomial random variablesF n are now set according to the amplitude coefficientsà n in (7), that is P r {F n = 1} = F n =p n = αà n / max{à n } = 1 − P r {F n = 0}. The resulting thinned array factor hence writes as in whichC = max{à n }/α. This scheme can be obtained by feeding the antenna elements with a single chain of phase shifters, which provide the N phases {ϕ n }. Therefore, while M × N t phase shifters are needed for scheme 1, here, only N t = 2 N/2 n=1F n phase shifters are required, with N t being still a Gaussian random variable with mean 2 N/2 n=1p n and variance 4 What is more, for scheme 2 the average number of active radiators depends on the number of beams. Also scheme 2 allows to obtain multi-beam array factors without amplitude tapering, though the excitation coefficients are different from the ones pertaining to scheme 1. However, by this scheme, the beams are not independent.
F M2 (u) has again a Gaussian distribution with mean and variance
C. GLOBAL CHARACTERISATION
A rough array factor characterization, and hence a comparison between the two schemes, can be given in terms of the distribution of the array factor magnitude, which, as remarked above, is easy to obtain for the symmetric case. However, the mean and variance of the array factor provides only local information, i.e., for each different value of u. A global metric, instead, should be linked to the distance (in a probabilistic sense) between the actual and the reference array factors. To this end, as in [20], we consider the normalised standardised error, for scheme 1 and i = 2 for scheme 2). In particular, performance is estimated in terms of ϵ(u) magnitude supremum (with u implied) (17) is equivalently recast as Equation (17) gives a measure of the error over the whole visible space, including the beam regions. Indeed, P r {S ≤ ξ} = p% entails that, with a probability of p%, ϵ(u) lies be- can be considered as generalised p-percent level curves [20].
It can be verified that the magnitude of the coefficient of variation [28], |CV (u)| = |σ Mi (u)/µ Mi (u)|, is relatively higher in the region of secondary lobes. Accordingly, it is expected that (18) is mainly contributed by the error in such a region, whereas Finding a closed-form solution for the S-distribution is a very complicated problem. However, for the symmetric thinned arrays under concern, the up-crossing method can be conveniently employed and it has proved to work remarkably well [24].
Let N ξ be the number of times |ϵ(u)| up-crosses (i.e., crosses with positive slope) the level ξ. Accordingly, a first result is P r {S ≤ ξ} = 1 − P r {N ξ ≥ 1} ≥ 1 − N ξ (assuming that |ϵ(u)| is below ξ at u = −1), where the Markov inequality, P r {N ξ ≥ 1} ≤ N ξ has been exploited and N ξ is the mean of the number of up-crossings. Hence, a lower bound for the S-distribution is obtained. However, if N ξ is modeled as a Poisson random variable [30], the Sdistribution as can be analytically estimated as [24] where P r {|ϵ(−1)| ≤ ξ} can be calculated from (5). Here, the final crucial issue is the computation of N ξ . This can be achieved as follows [22] in which f |ϵ||ϵ| ′ (ξ, γ; u) is the joint probability density function of |ϵ(u)| and its first derivative. Since ϵ(u) is a real stochastic process, it follows that the determination of the up-crossings of |ϵ(u)| is equivalent to the simultaneous study of the up-crossings of ϵ(u) and −ϵ(u). Moreover, since ϵ(u) is a Gaussian process [3], then ϵ(u) and its derivative ϵ ′ (u) = dϵ(u)/du are jointly Gaussian [22] (of course, the same holds true for −ϵ(u) and its derivative). Eventually, (20) can be written as (see [20] for details) This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and content may change prior to final publication.
is the standard deviation of dF Mi (u)/du (see Appendix), σ ′ Mi (u) = dσ Mi (u)/du. Note that in (21), we exploited the Bravais-Pearson correlation coefficient between ϵ(u) and its derivative is zero, for each u [20].
It is worth stressing once again the role of symmetric arrays assumption. As shown above, for the case at hand, S-distribution can be accurately determined and is relatively simple to compute since only a one-dimensional integration is required to obtain N ξ . Also, P r {|ϵ(−1)| ≤ ξ} can be calculated in an extremely simple way. The same does not hold true for asymmetric thinned arrays. In fact, in this case, to get tractable expressions, the real and imaginary part of ϵ(u) and their derivatives are considered as being four independent stationary Gaussian processes, with the real and imaginary part of ϵ(u) (resp. the real and imaginary part of ϵ ′ (u)) having the the same variance [31]. These are strong assumptions that generally lead to unreliable results [23].
IV. NUMERICAL ASSESSMENT
In this section, a numerical analysis is presented to check the theoretical findings and compare the proposed statistically thinned array schemes.
To this end, each realisation (sample function [22]) of the stochastic thinned array factor is obtained by employing a sample step in the variable u of 1/(10L), which is 5 times finer than the sampling step required by the bandwidth of the power pattern. In particular, 2000 realisations are employed in the following examples.
Each beam of the reference array factor is obtained by sampling a Taylor n-bar current distribution with n = 5 and side-lobe level equal to −25 dB [21]. Thus, the coefficients {A n } are related to the samples of the corresponding current distribution [4]. Furthermore, as stated above, elemental radiators are half-wavelength spaced.
In order to check the behavior of statistically thinned arrays as the number of beams varies, we consider four reference array factors. The first one consists of a single (that is, M = 1) Taylor beam centered at u = u 1 = 0, for which a (1) This single-beam case can be considered as a kind of touchstone for the other cases. The second case concerns two Taylor beams pointing at u 1 = 0 and u 2 = 0.5, respectively, which corresponds to set VOLUME 4, 2016 This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and The third reference array factor presents three Taylor beams at u 1 = 0, u 2 = 0.5 and u 3 = −0.
We would like to point out that here we just consider linear arrays for the sake of simplicity and for the computational burden purposes. However, the derived theoretical tools and the proposed thinned models can be easily generalised to deal with more general curved arrays. Also, they can be used for planar arrays while considering the cuts of the stochastic array factors, with a similar methodology as done in [1].
A. SCHEME 1
We consider three cases. Case 1 refers to N = 200 and α = 1 (natural thinning), case 2 to N = 200 and α = 5/7 (thinning at 50% percent) and case 3 to N = 280 and α = 5/7 (average number of active elements equal to that of case 1). Although N may seem excessive for linear arrays, it is worth remarking that in the literature linear thinned arrays with elements till 2 × 10 4 [3] have been considered in order to study the properties of statistically thinned arrays.
Results concerning case 1 are shown Fig. 2. In this natural thinning case the average number of active elements is equal to 70% of the maximum number N = 200. Fig. 2a shows the magnitudes (in dB) for the four reference array factors discussed above and the magnitudes of realisations of thinned array factors, all normalised with respect to their supremum. As can be seen, the side-lobes of |F (u)|/max{|F (u)|} increase with the number of beams. This trend was already observed for random aperiodic arrays [24] [29]. However, the main lobes of the actual and reference array factors are very similar. The above results are consistent with the variance behaviours reported in Fig. 2b. Indeed, as the number of beams increases, the variance levels become higher. This entails a greater dispersion around the actual reference array. Fig. 2c shows the comparison between the empirical and the theoretical S-distributions (obtained by exploiting eq.(19)). As can be seen, those curves almost overlap and hence the theoretical estimation works very well. It is interesting also to understand how the performance changes when the average of the actual number of elements, N t , varies having fixed N or when the average of N t is fixed and N increases. The next two examples just allow to shed some light on this question.
For case 2 a more severe thinning is imposed so that the average number of radiators after the thinning is lower than in the previous case. Indeed, now the mean of N t is equal to 100, whereas in the previous case it was 140. Results concerning this case are reported in Fig. 3. As expected, previous considerations still apply. However, now the side-lobe level is in general increased; this is consistent with the variance behaviours that are higher than the corresponding previous cases. This basically confirms that the actual number of elements plays a crucial role in controlling the dispersion of the array factor realisations, [1] [20] [23]- [26]. The point is that such a dispersion can be precisely estimated through the analytical S-distributions (see Fig. 3c).
In the last example (case 3), the maximum number of antenna elements is increased at N = 280, while the thinning is kept at 50%. This way, the mean of N t is the same as case 1. Results are shown in Fig. 4. While the trend is of course the same as the previous cases, it is seen that the performance is in between the one of case 1 and 2 (see, for example, the variance behaviours in 4b). This entails that the achievable performance is not only affected by the average number of elements in the array (after the thinning), as it is often stated in the literature. However, our theoretical estimations of S-distributions works remarkably well and hence gives a general tool to foresee the performance by accounting for all the relevant problem parameters.
As a further validation, scheme 1 was also tested using CST Studio Suite. In particular, Fig. 5 shows the comparison between the CST return and the theoretical directivity, in the azimuth plane, of a linear array of cylindrical halfwavelength dipoles arranged parallel to the z axis of the orthogonal Cartesian system, while the array axis coincides with the x axis. Moreover, N = 200, α = 1, M = 4, the operating frequency is 1 GHz and the spacing between antenna elements is half a wavelength. As can be seen, an excellent matching is observed.
B. SCHEME 2
For the analysis of scheme 2, we consider two cases, N = 200 and N = 280, both under natural thinning conditions. Note that now, the average number of active radiators depends not only on α but also on the number of beams (in general, on the current distribution). Figs. 6 and 7 report the results for such cases. It is seen that, as N increases, the performance improves. As remarked above, this is a general trend that holds true also for scheme 2. However, with respect to scheme 1, by comparing Figs. 2 and 6, it is seen that a worsening occurs. Hence, scheme 1 is better but requires a more complex overall feeding network.
C. DISCUSSION
From previous results, it can be noted that the S-distributions look nearly identical, regardless of the number of beams and the average number of retained elemental radiators. This, of course, does not mean that the distance between the statistically thinned array and the reference one is the same in all the cases. This is because S-distributions refer to standardised processes. Indeed, what they actually measure is the probability that the array factor is globally within a strip (around the reference one) that depends on the stan-dard deviation, σ Mi (u), which in turn depends on the case under consideration. Also, as argued above, since the error is mainly related to the side-lobe regions, the S-distribution gives an estimation of the peak level of secondary lobes.
Statistically thinned arrays are particularly suited for large antenna arrays which are populated by a high number of elements. In these cases, the statistical dispersion around the reference array factor can be made very low [3]. In this regard, we point out that, while the considered linear arrangement is chosen for computational convenience, the 8 VOLUME 4, 2016 This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and theory and the results can be easily adapted to deal with the one-dimensional cuts of a two-dimensional array factor.
Finally, in order to give a general picture of the achievable performance, the results shown above are summarized in Table 1. This table reports the rounded mean value of the number of active elements, N ti , and the normalised standard deviation of the array factor, averaged over u, for scheme 1 and i = 2 for scheme 2). The addressed cases are distinguished by indicating (N, α). Looking at the table, the following conclusions can be drawn: • For a given thinning factor, the variance decreases as the number of active elements increases; • For a given average number of active elements, natural thinning performs better (compare σ M1 for the cases (200, 1) and (280, 5/7); • While for scheme 1 the expected value of the number of active elements is always the same (regardless of the number of beams), this is not the case for scheme 2; • With the same number of beams, N and α, scheme 1 is more efficient than scheme 2; • σ Mi allows to roughly estimate the peak level of the secondary lobes. Concerning the last point, consider, for example, the case in Fig. 2. Here, the highest level of the secondary lobes, for the (200, 1)-two beams sub-case (of scheme 1), is about −15 dB. Hence, the peak side lobe level is actually in between 2.5 × σ M1 = 2.5 × 0.0574 −→ −16.86 dB and 4 × σ M1 = 4 × 0.0574 −→ −12.78 dB (see Table 1 for the value of σ M1 ). Since P r {S ≤ 2.5} ≈ ϵ (with ϵ being a very small positive real number) and P r {S ≤ 4} ≈ 1, 2.5 × 0.0574 and 4 × 0.0574 could be seen as the (statistical) minimum and maximum value of the highest level of the secondary lobes, respectively, if σ Mi can be considered nearly constant for u ∈ [−1, 1]. A more precise characterization can be obtained by considering lower and upper level curves, that is LC(u) = 2.5 × [σ Mi (u)/ max{|F DES M (u)|}] and UC(u) = 4 × [σ Mi (u)/ max{|F DES M (u)|}], that with probability almost equal to 1, contain the peak of secondary lobes for u ∈ [−1, 1].
It is worth mentioning that Table 1 also reports cases with N = 5000, borrowed from [4], which clearly show that performance improves with the number of radiators.
V. CONCLUSION
Statistically thinned arrays have usually been studied for single-beam array factor, considering only a linear phaseshift for beam steering.
Here, we have introduced two statistically thinned array schemes for simultaneous multiple-beam generation. In particular, we have analytically characterized the achievable performance in terms of how the resulting array factor statistically deviates from the reference one. To this end, the array factor variance, which gives local information, and the VOLUME 4, 2016 1: (Rounded) expected value of active elements, N ti , and mean value (with respect to u) of the normalised standard deviation of the array factor, σ Mi , relative to the examples shown above (i = 1 for scheme 1, i = 2 for scheme 2). The vector (200, 1) means N = 200 and α = 1 and the same holds for the other vectors. The acronym n.b. stands for number of beams. supremum of the standardised error magnitude, which gives global information (i.e, over the whole visible space), have been derived and linked to the parameters of the problem, such as number of elements in the reference array, level of thinning, number of beams, etc.
The numerical analysis showed that increasing the number of beams leads to an increase of the distance between the thinned and the reference array factors. Whereas, performance improves as the number of active antenna elements increases. What is more, is that the theoretical findings are in excellent agreement with the outcome of the Monte Carlo numerical analysis. Hence, the obtained results can be actually employed as a tool to set in advance the thinning strategy. In particular, the number of beams can be determined beforehand by checking if it is compatible with the desired performance. .
APPENDIX A
For scheme 1, it is easy to prove that the mean of the derivative of the array factor, F ′ M1 (u) = dF M1 (u)/du, is µ | 6,682.2 | 2022-01-01T00:00:00.000 | [
"Physics"
] |
Wave-momentum shaping for moving objects in heterogeneous and dynamic media
Light and sound waves can move objects through the transfer of linear or angular momentum, which has led to the development of optical and acoustic tweezers, with applications ranging from biomedical engineering to quantum optics. Although impressive manipulation results have been achieved, the stringent requirement for a highly controlled, low-reverberant and static environment still hinders the applicability of these techniques in many scenarios. Here we overcome this challenge and demonstrate the manipulation of objects in disordered and dynamic media by optimally tailoring the momentum of sound waves iteratively in the far field. The method does not require information about the object’s physical properties or the spatial structure of the surrounding medium but relies only on a real-time scattering matrix measurement and a positional guide-star. Our experiment demonstrates the possibility of optimally moving and rotating objects to extend the reach of wave-based object manipulation to complex and dynamic scattering media. We envision new opportunities for biomedical applications, sensing and manufacturing.
Ever since the emergence of optical tweezers [1,2], the non-contact manipulation of objects using electromagnetic [3,4] and acoustic waves [5][6][7] has become a central paradigm in quite diverse fields ranging from optomechanics to bio-acoustics.Sound waves, in particular, offer distinct advantages, as they are bio-compatible and harmless, while their short wavelengths can penetrate a wide range of heterogeneous, opaque, and absorbing media.Another key feature of acoustics is its wide frequency range, spanning from Hertz to Gigahertz, which facilitates the manipulation of particles varying in size from a few centimeters to a few micrometers.In this way, not only Mie [8][9][10] and Rayleigh particles can be addressed, but also complex objects including individual biological cells [11][12][13].
While various strategies have already been developed to collectively or selectively manipulate objects and particles, these techniques always rely on controlled and static environments.Collective dynamic positioning of particles trapped in the potential wells of a pressure field has been achieved in 1D [14], 2D [15,16] or 3D [17][18][19][20].Typically, by generating appropriate standing waves [21], particles or objects are trapped either on the pressure nodes or antinodes, depending on their contrast ratio with the surrounding fluid [13].More advanced strategies have also been developed to address the selectivity problem of standing-wave-based trapping, involving acoustic vortices [22], or the use of additional systems such as lenses [23], metasurfaces [24] or holograms [18,[25][26][27][28]. Considerable attention has also been paid to the development of on-chip acoustofluidic and acoustophoretic de-vices [12,[29][30][31] and wave-controlled micro-robots [32][33][34][35][36][37][38] for lab-on-a-chip and biomedical applications.However, the requirement for precisely controlled static environments and proximity to the target significantly restricts the applicability of these various techniques in many real-world scenarios.Practical cases involve disordered or dynamic environments where manipulation must occur at a considerable distance from the object that needs to be manipulated.
Here, we propose and experimentally demonstrate a wave momentum shaping approach, which only requires far-field information and allows us to move and rotate objects even in disordered or dynamic environments.Instead of relying on potential wells to trap the object, we continuously find and send the optimal mode mixture that transfers an optimal amount of momentum to the object.This mode mixture is updated during the motion as the scattering changes.The method is experimentally demonstrated in a macroscopic 2D acoustic cavity containing a movable object and a collection of scatterers.Far-field scattering matrix measurements allow us to determine the optimal wavefronts for shifting or rotating the object at each moment in time.Remarkably, the method neither requires the knowledge or modelling of acoustic forces nor any prior information on the physical properties of the object or disorder.Only a guidestar measurement of the object's position or rotation angle is needed, which is here provided by a camera.The remarkable robustness of the method is emphasized by implementing it in a dynamic scenario, where the scatterers composing the environment move randomly.The method may be transposed to other platforms and scales, such as ultrasounds or light for the motion of microscopic bodies.1. Moving an object in a complex scattering medium by acoustic wave-momentum shaping.a, We consider a parallel plate acoustic waveguide supporting 10 modes at the working frequency and containing cylindrical rigid scatterers (in black).The bottom surface of this waveguide is formed by the water in this container, allowing a spherical object to float and move freely (orange ball).Wave momentum shaping consists of finding and sending, at each time tm, the optimal mode mixture to push the ball along an arbitrarily chosen path (orange line).We achieve this by real-time far-field measurements, allowing us to track the evolution of the scattering matrix S as the object moves, deducing the wavefronts to be injected by the external speaker arrays to optimally deliver the target momentum.b, Example of a scattering matrix measured at a given time tm in our experiment.c, Difference between the scattering matrix at tm and the one measured at tm-1, showing the influence of a small object translation on scattering.We use the information collected at three consecutive time steps to derive the mode mixture that optimally pushes the ball in the desired direction.The static scatterers are later replaced with dynamic ones.
Principles of wave-momentum shaping
The idea of wave-momentum shaping is inspired by recent developments in adaptive optics and disordered photonics, where wavefront shaping techniques have been significantly advanced to focus light in disordered media or to compensate aberrations and multiple-scattering for various purposes [39][40][41][42][43].In the most straightforward implementation, a feedback mechanism allows a quantity of interest, such as the optical power focused at a given point, to be iteratively optimized by tuning the incident wavefronts [39].On the other hand, more advanced concepts, such as Wigner-Smith operators derived from a system's scattering matrix, have provided ways to focus light in disorder optimally [44], exert a maximal electromagnetic force or torque on static objects [45,46], or potentially even cool an ensemble of levitated particles [47,48].Wave-momentum shaping applies these ideas to the manipulation of moving objects, combining the optimal character of Wigner-Smith approaches with iterative guidestar techniques, necessitated by the dependence of the S matrix on the object position, which influences the complex scattering process, constantly modifying the field speckle.
Consider the experimental setup illustrated in Fig. 1a, consisting of an acoustic multimode waveguide support-ing ten modes (N = 10) at the operational frequency f 0 = 1590 Hz (audible sound).In the central part, we introduced a movable object (ping-pong ball) with a radius of 20 mm (≈ 0.1λ 0 ), which floats on the surface of a water tank.Within this tank, multiple static cylindrical scatterers (depicted as black cylinders) protrude above the water level, thus creating a complex scattering landscape.Two arrays of ten speakers are placed on both sides, labeled 1 and 2, allowing us to control the incident acoustic mode mixtures | Ψ (1,2) in ⟩.These incident waves are linearly scattered into outgoing mode mixtures | Ψ (1,2) out ⟩, which can be measured using microphones placed in the waveguide's asymptotic regions (see Methods and Supplementary Information).From such measurements, it is possible to deduce the scattering matrix S(t), which evolves with time because, as the target object moves, it modifies the scattering occurring in the central region.This dynamic scattering matrix obeys the relation | Ψ out (t)⟩ = S(t) | Ψ in (t)⟩, where we gathered the states related to both sides into single column vectors.Figure 1b shows an example of the measured scattering matrix, of dimension 2N × 2N, which is composed of four N × N sub-blocks, the reflection and transmission matrices r (1), (2) and t (1), (2) , describing how each of the ten Experimental demonstration of object guiding through acoustic wave-momentum shaping in a static scattering medium.a, A set of points, in blue, are chosen to define an overall S-shaped path to be followed by the moving ball, whose successive positions captured by a camera are shown by orange disks.The ball successfully reaches each blue point, where the S matrix is measured.Crucially, these checkpoints are chosen to zigzag about the S-shaped path, so that the last three consecutive measurements contain optimal information on the gradient of the S matrix with respect to the ball coordinates.b, Net momentum imparted to the ball at three different times on the path (black arrow), and its decomposition over the modes injected from the two sides.Note that these are not derived from the ball dynamics, but rather inferred from the momentum expectation value of the injected Wigner-Smith eigenstates.Remarkable agreement with the actual direction of the ball velocity, reported in panel a (black arrows), is observed.See supplementary movie SM1.
modes scatter on each side.The figure encodes the amplitude of the matrix coefficients in the transparency of the squares, and the phase in their color.Clearly, mode mixing occurs due to the presence of complex scattering.Quite intuitively, to make object manipulation possible, such a scattering matrix must depend on the position of the movable object.This dependence is evidenced by repeating the scattering matrix measurement after slightly moving the object (by a distance equal to a quarter of its radius, i.e., l = 5 mm), and plotting the difference in Fig. 1c.We observe that while the changes are small in magnitude, consistent with the fact that only one scatterer is moved, some information about the object motion seem to be embedded in scattering phase changes.
The S matrix's dependence on the object's position is relevant in the context of its dynamic manipulation.If we denote by α either the x or y coordinate of the movable object, the momentum transferred to it upon scattering ∆p α can be calculated via the expectation values of the operator C α = -i∂/∂α for the superposition states | Ψ in,out ⟩ [45].The momentum transferred onto the particle upon scattering is the difference between the momentum of the outgoing and incident mode mixtures in the vicinity of the particle.Assuming unitary scattering (S † S = 1), one can demonstrate the link between this momentum transfer and variation of S with α (Methods): The Hermitian operator Q α = -iS -1 dS/dα is known as a generalized Wigner-Smith operator [44].Equation ( 1) means that the momentum imparted locally onto the moving object upon scattering is related to the expectation value of Q α for the specific input state | Ψ in ⟩ in the far field.A direct consequence of Eq. ( 1) is that if the input state | Ψ in ⟩ is chosen to be an eigenvector of Q α , the momentum kick on the object will be proportional to its eigenvalue.Therefore, choosing the eigenstate with the highest eigenvalue as the input mode mixture will optimize the transfer of momentum to the object in the direction α.This is the basic physical principle behind wave-momentum shaping.
Linear-momentum transfer
We first apply wave-momentum shaping to the transfer of linear momentum, and experimentally demonstrate complete control over the trajectory of a moving object in a complex scattering medium, which is static for now.We start from the set-up of Fig. 1 and apply an iterative motion algorithm that works as follows: (i) Initially, the object is at rest.We send three random wave fields to move it slightly but randomly, and measure the Smatrix at three different nearby points, whose positions are measured by the camera; (ii) From these measurements, we estimate the components dS/dα of the gradient of S with respect to the coordinates α = x, y, using discrete derivative approximations; (iii) We compose Q α and diagonalize it to obtain mode mixtures and momentum expectations (eigenvalues) in α = x, y; (iv) We send a superposition of eigenvectors of Q x and Q y in proportion, to move the object in a desired direction and measure S again once the object has moved; (v) the process is iteratively repeated based on the last three measured S matrices until the object arrives at the desired destination.The method does not require calibration or access to the interior of the medium.
Figure 2a demonstrates the successful guiding of an object within a disordered medium using acoustic wavemomentum shaping.Several snapshots of the moving ball are blended into one picture to illustrate better the path followed by the moving scatterer.A video recorded by our camera can be found in Supplementary movie SM1.Remarkably, the acoustic fields injected from the far field are able to continuously move the floating ball through a chosen S-shaped trajectory within the disordered medium (the total path length is around four λ 0 ).It is worth noting that the path is discretized into intermediate checkpoints (blue disks) arranged in a zig-zag manner about the S-shaped trajectory to enable a good estimation of the S matrix gradient (see Section 1.6 and Fig. S6 of S.I.).Note that the object is not trapped, but moved by successive acoustic pushes much like a hockey player guiding a puck.
To illustrate the contribution of each mode and in which sense the input state at each time step is optimal, Fig. 2b compares at three distinct times the momentum expectation value of the input superposition (black arrow) with the ones of its individual modes components alone (colored arrows).It is clear that each mode contributes to pushing the ball in the correct final direction, and the total push is due to a collective action of all modes.We note that some modes do not push the ball exactly in the desired direction.Yet, this mixture is optimal given the constraints on the wave spatial degrees of freedom imposed by the disordered medium at this specific location.We also compare the total momentum expectation (black arrows in Fig. 2b), which is a theoretical prediction, to the actual velocity of the ball (black arrows in Fig. 2a), which is an experimental observation.The remarkable agreement between the directions of the expected momentum push, and those of the measured velocity confirms that we successfully implemented our wave-momentum shaping strategy.We conclude that the unavoidable absorption losses present in any experiment, which alter the fields' amplitudes more than their phases, does not significantly influence the direction of the momentum push predicted by the unitary theory.
The interested reader will find other path instances in Supplementary movie SM2.
Angular-momentum transfer
An advantage of the variational principle presented above is that α is not restricted to be the x or y coordinate, but can be any observable target parameter influencing the scattering.A relevant example we consider in the following is the rotation angle θ of an object.This choice will allow us to create an acoustic motor and rotate objects from a distance, by sending audible sound.Consider the angular-momentum transfer on a rotating object constructed from three balls glued together, and placed on a fixed rotation axis at its center, located within the disordered medium (Fig. 3a).The instantaneous scattering matrix S(t m ) is here measured at consecutive time instances t m with 20 degrees angle step, harnessing the angular momentum operator Q θ , and providing a way to induce optimal transfer of torque from the field to the object.Figure 3b reports the experimentally measured value of θ as a function of time.In this experiment, we first selected eigenvectors of Q θ with positive eigenvalues, consistent with the counter-clockwise rotation initially observed during the experiment (blue-shaded part of the figure).Then, we abruptly switched to input states with negative eigenvalues (red shaded part).The observation of a reversal of the rotation direction, reported in Fig. 3b, is thus consistent with theoretical expectations (Supplementary Movie SM3).
Manipulating in dynamic disorder
Since the manipulation method is based on real-time measurements of the instantaneous scattering matrix, nothing prevents the scattering environment from changing in time as well.To demonstrate this, we added other floating balls (similar to the moving target ball) inside the cavity.This experiment and its goal are illustrated in Fig. 4a.The ball we want to control is the orange one, while the blue ones are the added balls, which are anchored with light strings to prevent any collision with the target ball.The blue balls carry small metallic nuts, which allows us to randomize their motion by varying in time the magnetic field inside the cavity.We wish to control the path of the orange ball and make it follow a shape FIG. 5. Experimental field scans around the moving object.We measured the distribution of acoustic pressure amplitudes associated with the optimal transfer of linear momentum along the x and -y directions (panels a and b, respectively).The case of the rotating object is illustrated in c and d for clockwise and counterclockwise rotations, respectively.The cylindrical scatterers are represented by black disks, and the moving object is outlined with dashed circles.With our approach, we can create the optimal speckle field allowed by the scattering medium, automatically creating the best possible hot spot next to the object in order to achieve each prescribed momentum kick.
that looks like a period of a sine function (dashed blue line).Panel b shows the measured successive positions of all balls during the experiment.Contrary to the blue balls, which move unpredictably, the orange ball closely follows the pre-designed sinusoidal path.The deviations of the orange ball from this target path, shown as a blue line in c, are tiny.For comparison, we also plot as a red line the average distance browsed by the blue balls away from their initial position, which fluctuates more strongly in magnitude and speed, underlining the extreme control that we can maintain on the trajectory of the target.A video showing the robustness of the manipulation in the dynamically changing random medium in Fig. 2c is provided as Supplementary Movie SM4.
Acoustic pressure field maps
We provide another point of view on wave-momentum shaping by probing the acoustic pressure field in the vicinity of the moving object (Methods).For this purpose, we exert the optimal linear momentum push of a ball along different directions, such as +x, or -y, and rotations in opposite directions, and measure the acoustic field map.From the displayed pressure profiles (Fig. 5ad), we see that the speckle field tends to create hot spots of acoustic pressure to push the object in the right direction.Conversely, experimental pressure distributions for the eigenvalues with the smallest absolute value of the corresponding Wigner-Smith operator exhibit no hot spot near the particle and tend to put it in a silent zone (Supplementary Fig. S8).Note that input states that are optimal for motion along x can still exhibit a nonzero expectation value in the y direction.However, they remain the most efficient at pushing along x.Therefore, combining eigenvectors to control expectation values in unwanted directions may provide a way to further refine the algorithm and the precision of the motion.Sometimes the object finds itself in a location in the medium where it can't be pushed in the right direction given the available degrees of freedom in the speckle.This is not a problem since these points are isolated, and the algorithm will make the object catch up with the trajectory at the next points.To conclude, it is striking to observe how the optimal pressure field can be prepared around the object without knowing anything about the involved wave-matter interaction, nor about the object's shape or environment.This is a clear advantage of the present method when compared to conventional methods based on trapping.
CONCLUSION
In this work, we report the experimental control of an object's translation and rotation in a complex and dynamic scattering medium through the concept of wavemomentum shaping.An iterative manipulation protocol, based solely on the knowledge of the far-field scattering matrix of a system and a position guidestar, enables the optimal transfer of linear and angular momentum from an acoustic field for object manipulation within both static and dynamic disordered media.The dynamically injected wavefronts generate the optimal field speckle near the object to be moved, much like a hockey player guiding a puck, in order to produce successive momentum kicks.This method is of potential traps, robust against disorder, and tolerates that the surrounding medium changes in time throughout the manipulation.Remarkably, the method is rooted in momentum conservation and does not require any knowledge of the object to be manipulated, but only a guidestar measurement of its position.In addition, it does not require any modelling of interaction forces, making the protocol very general and broadly applicable to many real-life scenarios (including different waves, scales, objects, etc.).Future efforts will focus on developing methods for objects of various sizes, for example by transposing the concept to ultrasonic frequencies for the manipulation of smaller objects, as well as extensions to the control of multiple objects.For this, we note that the frequency degree of freedom could also be leveraged.We conclude that the method seems particularly promising for micromanipulation and acoustofluidic applications such as tissue engineering [49,50], biological analysis [12], and drug delivery [51,52], among others.
Experimental setup
The setup consists of a water-filled tank (Figs. 1 a and S1 a) coupled at the top to a two-dimensional air waveguide terminated at both ends by anechoic terminations.The water tank's width, length, and height are 100 × 100 × 3 cm, respectively.The 2D acoustic waveguide above it has a width of 104 cm, a length of 180 cm, and a height of 8 cm.Two columns of 10 ICP® microphones (PCB 130F20, 1/4 inch) separated by 5 cm are placed between the tank and the anechoic terminations on each side to measure the complex pressure field distribution inside the waveguide.Two columns of 10 amplified loudspeakers (Monacor MSH-115, 4 inches, with in-house amplifiers) are placed horizontally on each side's bottom of the air waveguide to ensure the efficient excitation of all ten modes inside the acoustic structure.The generation of incident wave states and the acquisition of the corresponding far-field scattering are made using an FPGA Speedgoat Performance Real-Time Target Machine controller (I/O 135, sampling rate 10 kHz) with 40 inputs and 20 outputs.The moving target scatterer is a ping-pong ball (diameter 4 cm and weight 4.17 g) floating on the water's surface.The static disorder scatterers are plastic cylinders of various diameters (2 to 4 cm) immersed in the water tank and partially surfaced without reaching the upper part of the 2D air waveguide.The cylinders and the ball are waxed (coated with candle wax, i.e., a hydrophobic film) to prevent the ball from sticking to the static scatterers by capillarity.The real-time position of the ball (Fig. S4) is captured by an ultra-wide Logitech Brio webcam, working in Full HD resolution (1920 × 1080pixels) and high refresh rate (60 frames per second) mode.The moving target is placed at the initial (starting) position with the help of a small iron nut glued to it and the electromagnet attached to the mechanical arm (Fig. S1 b), which is moved in a volume (1000 × 1000 × 110 mm) above the water tank by three high-precision linear stages (Newport® IMS stages with displacement error < 0.05 mm).
The rotating object of Fig. 3 comprises 3 ping-pong balls glued together in a line and placed on a fixed needle in its center to prevent linear displacement and only allow rotation while limiting friction.The balls forming it are painted with different patterns to facilitate the detection of the instant angle value.
To create the dynamic scattering medium, we used ten ping-pong balls and glued small metallic nuts on them.The scattering balls, evenly positioned around the intended paths, are attached to the bottom of the water tank by 3-to 8-cm-long nylon threads.The random fluctuations of these scatterers are then exacerbated by randomly moving the mechanical arm over the balls while randomly switching the state of the attached electromagnet.The disorder scatterers are placed at a significant enough distance from the target scatterer, which is still free-floating, to avoid any collision.
Finally, the top plates above the water tank are replaceable with carefully designed perforated plates (holes with a diameter of 1 mm and forming a square array with a period of 10 mm, Fig. S1 e to allow scanning the field inside the waveguide from a microphone placed on the robotic arm, which is located outside the waveguide.
Scattering matrix measurement
The complex scattering matrix S relates the incoming with the outgoing flux-normalized modes through a set of 2N linearly independent equations (2N = 20 is the total number of propagative modes, 10 from each side): Ψ out = S • Ψ in .Solving the scattering matrix S requires measuring N independent wave mode distributions excited by a combination of speakers that form an orthogonal basis.Our experiment uses an orthonormal basis where only one speaker is excited at a time with a 1590 Hz harmonic signal.For each excitation, the data collected by the microphone arrays on both sides can be used to determine the incident and outgoing modes Ψ in,out .With the hardware we used, this takes about 80 ms.Therefore, after 2N orthogonal excitations (1.6 s), the scattering matrix is solved for that particular scattering configuration.Such raw scattering matrix is neither perfectly symmetric, nor unitary, and is subsequently regularized, by discarding its very small antisymmetric part and rescaling its subunitary eigenvalues, keeping their phases (see Fig. S3).
Construction of Generalized Wigner-Smith Operators
The construction of the GWS operator Q α is based on gradient approximations, which require successive measurements of the scattering matrix S(t) at three different positions (time).
To derive the translation GWS operators Q x and Q y for the ball at position (x m , y m ) and time instance t m , we need, in addition to the scattering matrix S m measured at the actual position, the scattering matrices S m-1 and S m-2 measured at the two previous time-instances t m-1 , t m-2 , when the ball was located at coordinates (x m-1 , y m-1 ) and (x m-2 , y m-2 ) respectively.
With these three matrices S m , S m-1 , and S m-2 , the gradient of S can be derived with the following approximation formulae (2) With the gradient estimated, the construction of the GWS operators Q x , Q y is direct and reads as The error in the gradient approximation and, therefore, in the operators Q x and Q y depends on the shape of the triangle formed by the three measurement points, with the best results obtained for an equilateral triangle and worse for a flat scalene one (see Fig. S6).Detailed analysis of the triangle's influence on the derivation of GWS is provided in Supplementary Information.Therefore, for the best manipulation of the object position, the moving path is drawn with a zig-zag line to minimize the error in the GWS operators.Similarly, the rotation GWS operator Q θ requires the measurement of S m , S m-1 , and S m-2 , for three consecutive vane angles θ m , θ m-1 , and θ m-2 , taken at time t m , t m-1 , and t m-2 .
The gradient approximation is in that case derived with a backward three-point derivative where δθ is the angle difference between the time instance t m and t m-1 .
The GWS operator constructed to control the rotation of the vanes therefore reads as follows
Injection of optimal input mode mixtures
As explained in the main text, finding the optimal mode mixture to be injected to give the optimal momentum push to the object follows from Eq. ( 1).We, therefore, provide a short proof for this important equation.
For a particle in free space, the change of momentum transferred to it upon scattering ∆p α can be calculated via the expectation values of the operator C α = -i∂/∂α for the superposition states | Ψ in,out ⟩ In Refs.[44,45], it was shown that this relation, which is Eq. ( 1) in the main text, continues to hold even when the target particle is embedded in a scattering environment.In this way, the momentum push expected for a given far-field input is expressed as the expectation value of the generalized Wigner-Smith operator Q α , which is Hermitian.Therefore, the optimal momentum push is provided by the eigenvector of Q α with the highest eigenvalue.
Having measured the Wigner-Smith operators, we diagonalize them and find the eigenvector with the highest eigenvalue, and use them to calculate the optimal mode mixture to be injected to give the optimal momentum push to the particle.For example, if we want to move the object by ∆x and ∆y: (i) we diagonalize Q x and Q y ; (ii) we obtain their eigenvectors with highest eigenvalues, Ψ x,y , with eigenvalues calculated as δx and δx; (iii) we construct the optimal input state ∆x δx Ψ x + ∆x δx Ψ y .This input state is multiplied with the coupling coefficients matrix of the speakers M, obtaining the required voltage amplitudes and phases required on each speaker (see Fig. S2 and Fig. S5).In practice, to determine the direction we want to go, we measure the position of the ball (x, y) at a given time (Fig. S4) and compare it with the position of the next checkpoint on the trajectory, which we try to reach up to a certain threshold distance before moving on to the next checkpoint.
FIG.1.Moving an object in a complex scattering medium by acoustic wave-momentum shaping.a, We consider a parallel plate acoustic waveguide supporting 10 modes at the working frequency and containing cylindrical rigid scatterers (in black).The bottom surface of this waveguide is formed by the water in this container, allowing a spherical object to float and move freely (orange ball).Wave momentum shaping consists of finding and sending, at each time tm, the optimal mode mixture to push the ball along an arbitrarily chosen path (orange line).We achieve this by real-time far-field measurements, allowing us to track the evolution of the scattering matrix S as the object moves, deducing the wavefronts to be injected by the external speaker arrays to optimally deliver the target momentum.b, Example of a scattering matrix measured at a given time tm in our experiment.c, Difference between the scattering matrix at tm and the one measured at tm-1, showing the influence of a small object translation on scattering.We use the information collected at three consecutive time steps to derive the mode mixture that optimally pushes the ball in the desired direction.The static scatterers are later replaced with dynamic ones.
FIG.2.Experimental demonstration of object guiding through acoustic wave-momentum shaping in a static scattering medium.a, A set of points, in blue, are chosen to define an overall S-shaped path to be followed by the moving ball, whose successive positions captured by a camera are shown by orange disks.The ball successfully reaches each blue point, where the S matrix is measured.Crucially, these checkpoints are chosen to zigzag about the S-shaped path, so that the last three consecutive measurements contain optimal information on the gradient of the S matrix with respect to the ball coordinates.b, Net momentum imparted to the ball at three different times on the path (black arrow), and its decomposition over the modes injected from the two sides.Note that these are not derived from the ball dynamics, but rather inferred from the momentum expectation value of the injected Wigner-Smith eigenstates.Remarkable agreement with the actual direction of the ball velocity, reported in panel a (black arrows), is observed.See supplementary movie SM1.
FIG. 3 .
FIG.3.Experimental demonstration of object rotation by acoustic angular-momentum shaping in static scattering media.a, We use audible sound to rotate an object constructed from three balls glued together in a disordered medium.First, we move the target in the counter-clockwise direction (left part of the figure), and then abruptly switch its direction of rotation (right part).At each step (10 degrees), we extract from a far-field S matrix measurement the Wigner-Smith operator with respect to the rotation angle θ, which allows us to send the wavefront with maximal angular-momentum transfer.b, Measured angle versus time, confirming the rotation of the object, in the anti-clockwise then clockwise directions.
FIG. 4 .
FIG. 4. Moving a specific object among a dynamic ensemble of scatterers experiencing random motion.a, We let all scatterers move freely under fast external perturbations, and wish to control the trajectory of the orange ball, guiding it on a sinusoidal path.The blue balls have a metallic nut glued to them, allowing us to randomize their motion by applying fast magnetic perturbations with a moving external electromagnet.They are loosely anchored to the ground by strings to avoid any collision with the orange ball.b, Experimental trajectories measured by a camera, demonstrating the successful control of the ball trajectory even in this extreme dynamic scenario.c, Comparison between the measured deviation of the target ball center from the intended sinusoidal path (blue), and the large fluctuations of the other scatterers from their initial positions (red).
The controller is programmed by Matlab/Simulink® to generate the proper acoustic wavefronts by controlling the voltage of 20 loudspeakers and to acquire voltages produced by 40 microphones corresponding to the pressure signals.PCB Piezotronics 483C05 sensor conditioners are used to pre-condition the microphone signals.Examples of captured signals and post-processing are shown in Fig. S2. | 7,628.6 | 2023-12-04T00:00:00.000 | [
"Physics",
"Engineering"
] |